
Should we trust a service for our communication channel, or should we use our own channel and protocol?

Nowadays there are a lot of mechanisms that can be used to connect devices in the field to our own systems and infrastructure. If we look at the services offered by cloud providers, we will discover that each provider has one or more solutions that can be used as a messaging or transport platform between our devices and backend.
For example, if we take Microsoft Azure, we have Storage Queues and the Service Bus ‘suite’ (which contains Topics, Queues, Relay and Event Hub). Looking at an offering like this, we ask ourselves which service should be used. In the end, all these services are extraordinary and can bring value to our solution.

In this post we will debate a simple question that I have heard many times over the last two years in different meetings and projects:
Should we trust a service for our communication channel, or should we use our own channel and protocol?

Before jumping into the discussion, let’s elaborate on the question a little more. The communication between two different endpoints (device and backend) can be done over HTTP(S) or TCP. We can either define our own protocol and contracts, OR we can rely on an external provider that offers all the infrastructure we need.

In the second case, we have control over the content that is sent over the wire, but a part of the contract is defined by the external provider that created the communication channel.
Let’s take as an example a messaging system like Azure Service Bus Topics. Microsoft Azure provides a REST API that can be used to send messages over the wire. Amazon SNS is another service, provided by AWS, that offers something similar.
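To make this more concrete, here is a minimal sketch (in Python, using the requests library) of how a device could push a message to a Service Bus topic through the public REST endpoint. The namespace, topic name and SAS token below are placeholders; the exact shape of the URL, headers and authentication is precisely the part of the contract owned by the provider, not by us.

```python
import requests

# Placeholder values - replace with your own namespace, topic and SAS token.
NAMESPACE = "my-namespace"        # assumption: your Service Bus namespace
TOPIC = "device-telemetry"        # assumption: your topic name
SAS_TOKEN = "SharedAccessSignature sr=...&sig=...&se=...&skn=..."  # pre-generated SAS token


def send_to_topic(payload: str) -> None:
    """Post a single message to a Service Bus topic over the public REST endpoint."""
    url = f"https://{NAMESPACE}.servicebus.windows.net/{TOPIC}/messages"
    response = requests.post(
        url,
        data=payload.encode("utf-8"),
        headers={
            "Authorization": SAS_TOKEN,
            "Content-Type": "application/json",
        },
        timeout=10,
    )
    response.raise_for_status()  # a 201 Created is expected on success


send_to_topic('{"deviceId": "device-001", "temperature": 21.5}')
```

If the provider changes the endpoint format, the authentication scheme or the API version, this is the code that has to be updated on every device in the field.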
All these services are great and work as expected, but what happens if we create a system that will be developed over 2 or 3 years, will be in a testing phase for 1 year and will need to run for 10 years? We need to be able to provide a robust solution that can run for ~14 years without having to change it too much.
Why do we need something like this? Because it is pretty hard and complex to change or modify the communication channel. In a regulated field like banking or life sciences, this requires a lot of time and effort. On top of this, there will always be some machines or devices where the update was not installed successfully and human intervention is needed – extra costs.

At this moment there is no cloud provider that will guarantee that the current service API will not change (or suffer a breaking change) in the next 2 or 5 years. In 5 years, for example, they could be at v8 and v1 may not be supported anymore. Or the authentication mechanism that is used today may be outdated and discontinued.
This is a risk that appears when we use SaaS (Software as a Service). In these cases we don’t have any kind of control over the service that we consume. For example, Microsoft will announce a breaking change to an API at least a year before it is made.
The good part is that almost all cloud providers are currently backward compatible. All of them support the old versions of their APIs, even if those versions are deprecated or missing the features added more recently.
On the other hand, working with a REST API or a system hosted by us gives us full control over what kind of API we expose, what the contract is and so on. Even if we host the machines that expose the API in the cloud, we fully manage and control the API. In this way we can support the old API for as long as needed.
A solution like this automatically comes with extra costs – development, maintenance and support.

A decision between these two solutions is hard to make. We need to take into account what kind of clients will access our endpoints, what the cost of a custom solution is, and for how long our solution will be used. Of course, we also need to think about how easily a change can be pushed into the system.
In the last few months, we had to make a decision like this. We took into consideration the following parameters:

  • How long the solution should run without major changes: a long time (8-10 years)
  • How easily a major change can be pushed into the system: very hard
  • If an update fails, how critical it is to be able to reconnect to the device: high
  • How expensive an API change is: high
  • Are we working in a regulated field: yes
  • Is there a high risk of losing the connection with the devices after an update: yes


Based on these risks and the information we have at this moment, we decided to expose our own API and manage all the communication ourselves. We decided to expose a REST API as the public endpoint. Of course, behind this API we have the messaging systems that handle all the communication. The REST API acts as a wrapper over the messaging system consumed as SaaS. In this way we can fully manage and control the contracts between the backend and the devices. It is much simpler to update a component that is deployed on 10 or 100 servers than a client application that is deployed on 500,000 devices.
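As a rough illustration of this approach, below is a minimal sketch in Python using Flask (an assumption for the example, not the actual stack we used) of a versioned REST endpoint that we own and that simply forwards device messages to whatever messaging system sits behind it. The route name, payload shape and the forward_to_messaging_system helper are hypothetical.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


def forward_to_messaging_system(message: dict) -> None:
    """Hypothetical helper: push the message to the messaging system
    (Service Bus, a queue, etc.) that sits behind the public API.
    The devices never see this part, so it can be replaced without touching them."""
    ...


# The v1 contract exposed to the devices; because we own it,
# we can keep it alive for as long as the devices in the field need it.
@app.route("/api/v1/telemetry", methods=["POST"])
def telemetry_v1():
    payload = request.get_json(force=True)
    forward_to_messaging_system({"version": 1, "body": payload})
    return jsonify({"status": "accepted"}), 202


if __name__ == "__main__":
    app.run()
```

Changing the messaging system behind forward_to_messaging_system only means redeploying these few servers, which is exactly the property we were looking for.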
