
IoT Home Automation | Device tracking capabilities

In the last post, I talked about how I connected the yard gates to the IoT Home solution that I started to develop. Unfortunately, I was not yet able to connect the Paradox alarm system to the solution. It's not clear to me how to connect the relay as a keyswitch zone, but I hope to receive some help in the coming weeks and resolve it in the near future.

Why?
Because the WiFi connection is not stable, there are times when I lose the connection to the devices. This can be annoying, especially because I do not yet have a tracking mechanism that can give me information about the current device state.
I decided to enhance my solution with tracking capabilities. Until now I didn't add any logs to the solution because I wanted to see exactly where and what kind of data I should collect. It is easy to add logging capabilities to a system and end up with a bunch of logs that you don't need.

What?
In day-to-day use I observed that I need the following information:

  • When was the last time the device was online and checked if new commands were available for it
  • When was the last time a command was received by the device
  • What was the last command received by the device
  • The current device status (e.g. gate is closed/open).

Even if I don't yet have the physical capability to read the device status, I already bought some sensors that I want to integrate into the system, which will allow me to know if the gate is open or closed.

How?
The tracking capabilities can be implemented on the backend, without requiring a firmware update to the ESP8266. The moment the device checks for a new command, I can directly update all the tracking data.
Tracking data is stored inside an Azure Table, where each device is represented by a separate entity (row). I'm using the device type as the Partition Key (at this moment I only have gates) and the device id as the Row Key. Each time a device checks for new commands, I update the Azure Table.
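To make this concrete, here is a minimal sketch of how the backend could record a check-in, assuming the classic Microsoft.WindowsAzure.Storage SDK (the same one that provides TableEntity); the table name and connection string are my own placeholders, not part of the actual solution:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class DeviceTracker
{
    private readonly CloudTable _table;

    public DeviceTracker(string connectionString)
    {
        // "devicestatus" is a placeholder table name.
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        _table = account.CreateCloudTableClient().GetTableReference("devicestatus");
        _table.CreateIfNotExists();
    }

    // Called every time a device polls the backend for new commands.
    public void RecordCheck(string deviceType, string deviceId)
    {
        // DynamicTableEntity lets us merge only the LastNormalCheck property,
        // leaving LastCommandReceived/LastCommandType untouched.
        DynamicTableEntity update = new DynamicTableEntity(deviceType, deviceId);
        update.Properties["LastNormalCheck"] = new EntityProperty(DateTime.UtcNow);
        _table.Execute(TableOperation.InsertOrMerge(update));
    }
}
```

InsertOrMerge is a good fit here because it creates the row on the first check-in and only touches the changed properties afterwards.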

Once I have all the tracking information inside the Azure Table, the web application can fetch and display it.

    using System;
    using Microsoft.WindowsAzure.Storage.Table;

    // One entity (row) per device: device type = PartitionKey, device id = RowKey.
    public class DeviceStatus : TableEntity
    {
        public DeviceStatus()
        {
            // Azure Tables reject DateTime values before year 1601, so the C#
            // default (year 1) cannot be used; start from an arbitrary safe date.
            LastNormalCheck = new DateTime(2000, 1, 1, 0, 0, 0);
            LastCommandReceived = new DateTime(2000, 1, 1, 0, 0, 0);
        }

        public string DeviceType
        {
            get => PartitionKey;
            set => PartitionKey = value;
        }

        public string DeviceId
        {
            get => RowKey;
            set => RowKey = value;
        }

        public DateTime LastNormalCheck { get; set; }
        public string CurrentState { get; set; }
        public DateTime LastCommandReceived { get; set; }
        public string LastCommandType { get; set; }

        public override string ToString()
        {
            string result =
                $"Device Type: '{DeviceType}' | Device Id: '{DeviceId}' | State: '{CurrentState}' | Last Check: '{LastNormalCheck}' | Last command time: '{LastCommandReceived}' | Last command: '{LastCommandType}'";
            return result;
        }
    }

Don't forget that when you use DateTime inside Azure Tables, the minimum accepted value is year 1601. Taking this into account, make sure to set a default value higher than this, because in C# the default value for DateTime is year 1.
To keep things simple, I'm using ToString to print the data inside the web interface.
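Reading the entities back for the web page can then be sketched roughly like this, assuming the same classic Azure Storage SDK and a `_table` reference as above; the "gate" partition value matches the device type mentioned earlier:

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Table;

public class DeviceStatusReader
{
    private readonly CloudTable _table;

    public DeviceStatusReader(CloudTable table)
    {
        _table = table;
    }

    // Prints a one-line summary (via DeviceStatus.ToString) for every gate.
    public void PrintAllGates()
    {
        TableQuery<DeviceStatus> query = new TableQuery<DeviceStatus>()
            .Where(TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.Equal, "gate"));

        foreach (DeviceStatus device in _table.ExecuteQuery(query))
        {
            Console.WriteLine(device);
        }
    }
}
```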




Should I do any optimizations?
Even if Azure Table storage is cheap, you need to take into account that each device will update its state every 2 seconds, which is the interval at which each device checks for new data. This means that for 3 devices there will be around 10,000 operations per hour, equivalent to roughly 7M transactions per month.
The price for 10,000 transactions is €0.000304, meaning that we will pay around €0.21 per month for the transactions executed on top of our system. For now it doesn't make sense to do any kind of optimization.
In the future, we might want to update the last-online field only once a minute, or when the device receives a command. This would reduce the number of transactions roughly 30×, from 7M to around 0.23M per month. I might do this optimization when I have some free time, but for now it doesn't make sense.
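If I ever implement that throttling, the decision logic could look something like the sketch below (this is a possible future design, not what runs today): persist the timestamp only when a minute has passed since the last write or a command was actually delivered.

```csharp
using System;
using System.Collections.Concurrent;

public class ThrottledTracker
{
    private static readonly TimeSpan MinInterval = TimeSpan.FromMinutes(1);

    // Last time we wrote to the Azure Table, per device.
    private readonly ConcurrentDictionary<string, DateTime> _lastWrite =
        new ConcurrentDictionary<string, DateTime>();

    // Returns true when the Azure Table should actually be updated.
    public bool ShouldPersist(string deviceId, bool commandReceived, DateTime now)
    {
        DateTime last = _lastWrite.GetOrAdd(deviceId, DateTime.MinValue);
        if (!commandReceived && now - last < MinInterval)
        {
            return false; // nothing interesting happened, skip this write
        }

        _lastWrite[deviceId] = now;
        return true;
    }
}
```

With a 2-second polling interval, only 1 write out of every 30 polls would reach the table, which is where the 30× reduction comes from.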

Conclusion
At this moment I have a simple tracking mechanism in place that allows me to track device behavior. For the development phase this solution will work, but once I finish development I'll need to redesign the tracking solution, because the gateway will run inside the house and I want to control how often I make requests outside the house (e.g. to Azure Tables).
