
IoT Home Automation | Device tracking capabilities

In the last post, I talked about how I connected the yard gates to the IoT Home solution that I started to develop. Unfortunately, I have not yet been able to connect the Paradox alarm system to the solution. It is not clear to me how to connect the relay as a keyswitch zone, but I hope to receive some help in the coming weeks and resolve it in the near future.

Because the WiFi connection is not stable, there are times when I lose the connection to the devices. This can be annoying, especially because I do not yet have a tracking mechanism that can give me information about the current device state.
I decided to enhance my solution with tracking capabilities. Until now I didn't add any logs to the solution because I wanted to see exactly where and what kind of data I should collect. It is easy to add logging to a system and end up with a pile of logs that you don't actually need.

In day to day use I observed that I need the following information:

  • When was the last time the device was online and checked if new commands are available for it
  • When was the last time a command was received by the device
  • What was the last command received by the device
  • The current device status (e.g. gate is closed/open).

Even if I don’t yet have the physical capability to read the device status, I have already bought some sensors that I want to integrate into the system, which will allow me to know if the gate is open or closed.

The tracking capabilities can be implemented on the backend, without requiring a firmware update on the ESP8266. At the moment when the device checks for a new command, I can directly update all the tracking data.
Tracking data is stored inside an Azure Table, where each device is represented by a separate entity (row). I’m using the device type as Partition Key (at this moment I only have gates) and the device id as Row Key. Each time the device checks for new commands, I update the Azure Table.
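This per-check update can be sketched with the classic Microsoft.WindowsAzure.Storage SDK. The connection string placeholder and the table name "devicestatus" are my assumptions, not values from the original solution; I use InsertOrMerge with a DynamicTableEntity so that a check-in that delivered no command does not overwrite the last-command fields.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class DeviceTracker
{
    // Called every time a device polls the backend for new commands.
    // 'commandType' is non-null only when a command was actually delivered.
    public static void UpdateTracking(string deviceType, string deviceId,
                                      string commandType = null)
    {
        CloudStorageAccount account =
            CloudStorageAccount.Parse("<storage-connection-string>");
        CloudTable table = account.CreateCloudTableClient()
                                  .GetTableReference("devicestatus");
        table.CreateIfNotExists();

        // DynamicTableEntity lets us send only the properties that changed.
        var entity = new DynamicTableEntity(deviceType, deviceId);
        entity.Properties["LastNormalCheck"] =
            new EntityProperty(DateTime.UtcNow);

        if (commandType != null)
        {
            entity.Properties["LastCommandReceived"] =
                new EntityProperty(DateTime.UtcNow);
            entity.Properties["LastCommandType"] =
                new EntityProperty(commandType);
        }

        // Upsert: creates the row on the first check-in, merges afterwards.
        table.Execute(TableOperation.InsertOrMerge(entity));
    }
}
```

InsertOrMerge keeps exactly one entity per device, matching the one-row-per-device layout described above.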

Once I have all the tracking information inside the Azure Table, I can fetch it into the web application.

    public class DeviceStatus : TableEntity
    {
        public DeviceStatus()
        {
            // Azure Table Storage rejects DateTime values before year 1601,
            // so the defaults must be above C#'s DateTime.MinValue (year 1).
            LastNormalCheck = new DateTime(2000, 1, 1, 0, 0, 0);
            LastCommandReceived = new DateTime(2000, 1, 1, 0, 0, 0);
        }

        public string DeviceType
        {
            get => PartitionKey;
            set => PartitionKey = value;
        }

        public string DeviceId
        {
            get => RowKey;
            set => RowKey = value;
        }

        public DateTime LastNormalCheck { get; set; }
        public string CurrentState { get; set; }
        public DateTime LastCommandReceived { get; set; }
        public string LastCommandType { get; set; }

        public override string ToString()
        {
            return
                $"Device Type: '{DeviceType}' | Device Id: '{DeviceId}' | State: '{CurrentState}' | Last Check: '{LastNormalCheck}' | Last command time: '{LastCommandReceived}' | Last command: '{LastCommandType}'";
        }
    }
Don't forget that when you are using DateTime inside Azure Tables, the minimum accepted value is in the year 1601. Because the default DateTime value in C# is in the year 1, make sure to set a default value above the Azure Tables minimum.
To keep things simple, I'm using ToString to print the data inside the web interface.
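Fetching the rows for the web page can be sketched like this, again with the classic storage SDK. The "gates" partition value matches the device type mentioned earlier; the connection string placeholder and table name are my assumptions.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public static class DeviceStatusReader
{
    // Lists the tracking row of every gate device, using the
    // DeviceStatus entity defined above and its ToString() override.
    public static void PrintGateStatuses()
    {
        CloudStorageAccount account =
            CloudStorageAccount.Parse("<storage-connection-string>");
        CloudTable table = account.CreateCloudTableClient()
                                  .GetTableReference("devicestatus");

        TableQuery<DeviceStatus> query = new TableQuery<DeviceStatus>()
            .Where(TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.Equal, "gates"));

        foreach (DeviceStatus status in table.ExecuteQuery(query))
        {
            Console.WriteLine(status.ToString());
        }
    }
}
```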

Should I do any optimizations?
Even if Azure Table Storage is cheap, you need to take into account that each device updates its state every 2 seconds, which is the interval at which each device checks for new data. Each check generates a read (looking for a command) and a write (updating the tracking data), so for 3 devices this means around 10,000 operations per hour, which is equivalent to roughly 7M transactions per month.
The price for 10,000 transactions is €0.000304, meaning that we will pay around €0.21 for the transactions executed on top of our system. For now it doesn’t make sense to do any kind of optimization to the system.
In the future, we might want to update the last online field only once a minute, or when the device receives a command. In this way, we would reduce the number of transactions by around 30x, from 7M to around 0.23M transactions per month. I might do this optimization when I have some free time, but for now it doesn’t make sense.
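That future optimization could look like a small in-memory throttle on the backend. This is only an illustration: the one-minute threshold matches the idea above, but the class and method names are hypothetical.

```csharp
using System;
using System.Collections.Generic;

public static class TrackingThrottle
{
    // Remembers, per device, when the tracking row was last written.
    private static readonly Dictionary<string, DateTime> LastWrite =
        new Dictionary<string, DateTime>();

    // Returns true when the tracking row should actually be written:
    // on the first check-in, when a command was delivered, or when at
    // least one minute has passed since the previous write.
    public static bool ShouldWrite(string deviceId, bool commandDelivered,
                                   DateTime now)
    {
        DateTime last;
        bool seenBefore = LastWrite.TryGetValue(deviceId, out last);

        if (commandDelivered || !seenBefore ||
            now - last >= TimeSpan.FromMinutes(1))
        {
            LastWrite[deviceId] = now;
            return true;
        }

        return false; // skip this poll and save a storage transaction
    }
}
```

With a 2-second poll interval, only one write in roughly thirty reaches the table, which is where the ~30x reduction comes from.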

At this moment I have a simple tracking mechanism in place that allows me to track device behavior. For the development phase this solution will work, but once I finish development I'll need to redesign the tracking solution, because the gateway will run inside the house and I want to control how often I make requests outside the house (e.g. to Azure Tables).

