
Initial RDP connection on a remote machine over Relay Hybrid Connection and Web Sockets - Azure Relay

In one of my recent posts I showed how we can establish a PuTTY connection between two different machines through Azure Relay Hybrid Connections using web sockets.
Today, we will take a look at a Node.js sample that initiates the tunnel used for an RDP connection between two machines over Azure Relay.

Context
Before jumping to the solution, let's take a look at the RDP Connection Sequence (https://msdn.microsoft.com/en-us/library/cc240452.aspx?f=255&MSPPError=-2147217396).
As the connection sequence flow shows, multiple connections are opened during an RDP session. The initial connection between client and server is used for the handshake and credential validation. Once the credentials are validated, that connection is closed and other socket connections are opened automatically.
In this sample, we update the original code, which was written for Telnet. The first five steps of the flow are supported, up to the moment when the initial socket connection is closed and a new one is opened.
What we would need to do after these steps is covered at the end of the post.

Implementation
The implementation is straightforward and similar to the one used for the PuTTY connection. There are only a few small things that we need to take into account.
GitHub Source code: https://github.com/vunvulear/Stuff/tree/master/Azure/relay-hybrid-connections-rdp-initial

Server.js needs to run on the machine that we want to access. In the code, we accept the relay connection and redirect all content that is sent through the Azure Relay Hybrid Connection to the local socket. The connection handler is registered through hyco-ws's createRelayedServer (shown here to make the fragment complete):
var wss = webrelay.createRelayedServer(
  {
    server: webrelay.createRelayListenUri(ns, path),
    token: function () { return webrelay.createRelayToken('http://' + ns, keyrule, key); }
  },
  function (socket) {
    relaysocket = socket;
    console.log('New connection from client');
    relaysocket.onmessage = function (event) {
      // Send data received over the relay to the local socket (local port)
      myLocalHost.write(event.data);
    };
    relaysocket.on('close', function () {
      console.log('Relay connection was closed');
    });
  });
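The snippet references a few names that are defined elsewhere in server.js. A minimal sketch of the setup they imply; all values are placeholders, and the names follow the hyco-ws samples rather than the linked repository:

var webrelay = require('hyco-ws'); // Azure Relay web socket library
var net = require('net');

var ns = 'yournamespace.servicebus.windows.net'; // Relay namespace
var path = 'yourhybridconnection';               // Hybrid Connection name
var keyrule = 'RootManageSharedAccessKey';       // SAS key name
var key = 'your-sas-key';                        // SAS key value
var localport = 3389;                            // local port exposed by this side

var relaysocket; // the relay web socket, set when a client connects
var myLocalHost; // the local socket, set when something connects to localport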

The second step happens on our local socket, where we need to redirect the content from the local socket to the web socket.
net.createServer(function (localsocket) {
  myLocalHost = localsocket;
  // Forward everything received on the local port to the relay web socket
  myLocalHost.on('data', function (d) {
    relaysocket.send(d);
  });
  myLocalHost.on('error', function (err) { console.log('Socket error: ' + err.stack); });
}).listen(localport);

On the other machine, where we run client.js, we do a similar thing: listen to the web socket that communicates with the Azure Relay Hybrid Connection, redirect its content to the local port, and redirect all content from the local port back to the web socket.
var localsocket; // created lazily, on the first message received from the relay

var relayclient = webrelay.relayedConnect(
        webrelay.createRelaySendUri(ns, path),
        webrelay.createRelayToken('http://' + ns, keyrule, key),
        function (socket) {
            console.log('Connected to relay');
            relayclient.onmessage = function (event) {
                if (typeof localsocket === 'undefined') {
                    // Create the local socket to the given port on the first message
                    localsocket = net.connect(sourceport, function () {
                        console.log('New socket');
                    });
                    // Forward everything received on the local port to the relay
                    localsocket.on('data', function (data) {
                        relayclient.send(data);
                    });
                    localsocket.on('error', function (err) { console.log('Socket error: ' + err.stack); });
                }
                // Forward relay content to the local socket
                localsocket.write(event.data);
            };
        }
    );
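To exercise the tunnel end to end, each script runs on its own machine and the RDP client is pointed at the port the net server listens on. A possible sequence, following the wiring of the snippets above (the mstsc invocation and the concrete ports are assumptions, not part of the sample):

node server.js                    # relay listener; exposes localport to the RDP client
node client.js                    # relay sender; forwards traffic to sourceport (typically 3389)
mstsc /v:127.0.0.1:<localport>    # run on the side where server.js listens

As described in the context, only the first steps of the connection sequence will complete before the initial socket is closed.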

Next steps
What if we want to extend the current solution to support a full RDP connection over Azure Relay Hybrid Connection? There are two clear steps that need to be done.

1. Support multiple connections
We need to extend client.js and server.js to carry multiple socket connections over the same web socket. This requires marking each packet sent over Azure Relay with an identifier that tells the other side which socket the content must be redirected to; a minimal sketch follows.
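One possible framing scheme, assuming each web socket message arrives as one complete frame (the web socket protocol preserves message boundaries): prefix every payload with a 4-byte connection id and demultiplex on the receiving side. The names below (frame, unframe, connections, onRelayMessage) are illustrative, not taken from the sample.

function frame(connectionId, data) {
  // 4-byte big-endian connection id, followed by the payload
  var header = Buffer.alloc(4);
  header.writeUInt32BE(connectionId, 0);
  return Buffer.concat([header, data]);
}

function unframe(message) {
  var buffer = Buffer.from(message);
  return {
    connectionId: buffer.readUInt32BE(0),
    data: buffer.slice(4)
  };
}

var connections = {}; // connectionId -> local net.Socket

function onRelayMessage(message) {
  // Route each incoming frame to the local socket it belongs to
  var packet = unframe(message);
  var target = connections[packet.connectionId];
  if (target) {
    target.write(packet.data);
  }
}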

2. Buffering
Even though the solution will work without it, it is pretty clear that we need a buffering mechanism able to stream all the content sent over Azure Relay in a uniform way. With only one connection open this would not be necessary; with multiple connections going over the same web socket, a buffering mechanism is required (see the sketch below).
Without it the solution will still work, but the connection will not be stable enough.
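A minimal sketch of such a mechanism: a FIFO send queue in front of the relay web socket, so frames coming from different local sockets go out one at a time and in order. It assumes the relay socket accepts a completion callback on send, as the underlying ws module does; queueFrame and drainQueue are illustrative names.

var sendQueue = [];
var sending = false;

function queueFrame(frameBuffer) {
  sendQueue.push(frameBuffer);
  drainQueue();
}

function drainQueue() {
  if (sending || sendQueue.length === 0) {
    return;
  }
  sending = true;
  // Send one frame at a time; the next one goes out only after the
  // previous send has completed, so frames keep their order
  relaysocket.send(sendQueue.shift(), function (err) {
    sending = false;
    if (err) {
      console.log('Relay send failed: ' + err.message);
      return;
    }
    drainQueue();
  });
}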

Conclusion
Yes, it is possible to tunnel an RDP connection over Azure Relay. All the functionality and tools we need are already available. Support for multiple connections and buffering are the two features required for any kind of remote connection we might want to establish.
Once we add them, we will be able to tunnel a VNC or an FTP connection without any problems.

