Friday, August 19, 2016

[Post Event] ITCamp Summer Community Event, August 18 - Cluj-Napoca

On August 18, 2016 we had the ITCamp Summer Community Event in Cluj-Napoca. More than 55 people attended the event. In comparison with past events, we dedicated more time to networking between the sessions and at the end of the event.
There were two sessions in the afternoon. The first one was about .NET Security (including .NET Core) and the second one was about how we can write actor-based concurrency using Elixir. Personally, I discovered a new world that Elixir opens up, one that has been available for more than 30 years in the Erlang world. Seeing a system with eight nines of availability (99.999999%) is impressive, but knowing that such a system was available 25 years ago is even more impressive.
As usual, a big THANK YOU to all attendees and to the people involved behind the scenes. And another THANK YOU to our sponsors, who made this event possible:

Evozon
Recognos
You can find below the abstract of each session and its slides. At the end of the post you can find pictures from the event.

See you next time!

.NET Security (Radu Vunvulea)
Abstract: When was the last time you pushed a .NET framework update to a production environment? Did you define an update process for the .NET framework in production environments? Is the support team aware of what can happen if a security update is not pushed to the machines?
In this session we will take a look at some security problems that the .NET framework has and what can happen if we don't take security into account. We will look at different versions of .NET (including .NET Core). Best practices related to this topic will be covered and debated.
Slides:

Actor based concurrency with Elixir
Abstract: Because of the increased demand for interconnected systems (IoT, SaaS), we need new and improved ways of handling reliable (soft) real-time systems. One battle-tested approach is based on the Actor Model, pioneered by Erlang and brought to the masses by Elixir.
Topics that will be covered:
- short introduction to functional programming principles with examples in Elixir
- actor model concurrency: why is it needed, advantages & disadvantages
- OTP patterns for handling agents: GenServer & Supervisor
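The core idea from the topics above can be sketched outside Elixir as well. The following is a minimal, hypothetical Python illustration (not the session's actual Elixir code): each actor owns its state, processes one message at a time from a mailbox, and exposes GenServer-style `cast` (fire-and-forget) and `call` (send and wait) operations. The `Actor` and `CounterActor` names are invented for this example.

```python
import queue
import threading

class Actor:
    """Runs a mailbox loop on its own thread; state is touched only there."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while True:
            msg, reply = self._mailbox.get()
            if msg is None:           # poison pill stops the actor
                break
            result = self.handle(msg)
            if reply is not None:     # synchronous request (GenServer.call-like)
                reply.put(result)

    def cast(self, msg):              # fire-and-forget (GenServer.cast-like)
        self._mailbox.put((msg, None))

    def call(self, msg):              # send a message and wait for the reply
        reply = queue.Queue()
        self._mailbox.put((msg, reply))
        return reply.get()

    def stop(self):
        self._mailbox.put((None, None))

class CounterActor(Actor):
    def __init__(self):
        self.count = 0                # private state, owned by the actor
        super().__init__()

    def handle(self, msg):
        if msg == "inc":
            self.count += 1
        elif msg == "get":
            return self.count

counter = CounterActor()
for _ in range(5):
    counter.cast("inc")
print(counter.call("get"))            # 5: the mailbox serialized all updates
counter.stop()
```

Because only the actor's thread ever touches `count`, no locks are needed; the mailbox is the whole concurrency model. In OTP, a Supervisor would additionally restart the actor on crash, which this sketch does not attempt.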

Tuesday, August 9, 2016

Azure CDN – Solutions for things that are not available (yet)

Last week I had some interesting discussions around payload delivery and CDNs. I realized how easily people can misunderstand some features or functionality, thinking that a behavior is normal or making wrong assumptions.
In this context, I will write 3 posts about Azure CDNs, focusing on what is available, what we can do and what we cannot do.


Let’s see what we can do when we need the Azure CDN features that are missing at this moment in time. In the last post we talked about these features, so we will not focus on explaining them here – for that, please see the previous post.

SAS Support for Blob Storage
If you need, on top of Azure Storage, a CDN system that has support for SAS, then you need to be inventive. The simplest solution is to replicate the content in multiple Azure Storage Accounts, or even in the same Storage Account (if the problem is download speed), and create a web endpoint (which can be hosted as a web app) that provides the locations from where the content can be downloaded.
This web endpoint can use a Round Robin mechanism (the simplest approach) to serve different locations.

Using this approach, you will also need a mechanism that replicates the content to multiple locations. AzCopy might be a good out-of-the-box solution.
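The location-provider endpoint described above can be sketched in a few lines. This is a hedged illustration only: the storage account URLs are placeholders, and a real endpoint would run as an Azure Web App and would typically also append a short-lived SAS token to each returned location.

```python
import itertools

# Content replicated (e.g. with AzCopy) into several storage accounts;
# these account names are invented for the example.
replicas = [
    "https://account1.blob.core.windows.net/content",
    "https://account2.blob.core.windows.net/content",
    "https://account3.blob.core.windows.net/content",
]

_cycle = itertools.cycle(replicas)

def next_download_location(blob_name: str) -> str:
    """Round Robin over the replicas; each call points to the next account."""
    return f"{next(_cycle)}/{blob_name}"

print(next_download_location("setup.zip"))  # account1's copy
print(next_download_location("setup.zip"))  # account2's copy
```

Round Robin is the simplest policy, as noted above; the same function could instead pick the replica nearest to the caller or the least loaded one.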

HTTPS for Custom DNS Domains
The story with Azure CDN is the same as with Azure Storage: at this moment we don’t yet have support for HTTPS combined with custom DNS domains. In this case, if you really need HTTPS, then the only solution is to serve the content from an Azure Web App. There are not too many options for now.

HTTPS Client Certificates
The story is similar to the previous one. We don’t have too many options. The best way for now is to use an Azure Web App and serve your binary content from there.

HTTP/2
There is none. As above, the workaround is an Azure Web App.

Fallback to custom URI
For these situations, we have a simple solution. We can go with the approach presented in the SAS Support example. The additional thing we need to do is to have multiple Azure Web Apps that can serve the content locations, with an Azure Traffic Manager in front of them.


Access logs of CDNs are raw data
If you really need these logs, then you need to use Azure Blob Storage with the approach above, and activate logging in each Azure Storage Account.

As we can see, there are many features supported by Azure CDN. Of course, we will not be able to find a perfect solution that supports everything we need, and the same is true for Azure CDN. What is important to remember is that a part of the features we are missing are already marked as In Progress on the Azure feedback portal. This means that soon we will have support for them as well.

Monday, August 8, 2016

ITCamp summer event in Cluj-Napoca, August 18

For Romanian colleagues.
In August, ITCamp is organizing a new event for IT professionals in Cluj-Napoca. The event will take place on August 18, in the Evozon building.
Participation in the event is free. We thank our sponsors for their support (Evozon and Recognos).
Program:
17:45 - 18:00 - Registration (coffee and beverages)
18:00 - 19:00 - .NET Security (Radu Vunvulea)
19:00 - 19:30 - Break (coffee and beverages)
19:30 - 20:30 - Actor based concurrency with Elixir (Adrian Magdas)
20:30 - 21:00 - Networking (coffee and beverages)

Session descriptions:
.NET Security (Radu Vunvulea)
When was the last time you pushed a .NET framework update to a production environment? Did you define an update process for the .NET framework in production environments? Is the support team aware of what can happen if a security update is not pushed to the machines?
In this session we will take a look at some security problems that the .NET framework has and what can happen if we don't take security into account. We will look at different versions of .NET (including .NET Core). Best practices related to this topic will be covered and debated.
Actor based concurrency with Elixir
Because of increased demand for interconnected systems(IOT, SAAS) we need new improved ways of handling reliable (soft)real-time systems. One battle-tested approach is based on the Actor Model pioneered by Erlang and brought to the masses by Elixir.
Topics that will be covered:
- short introduction to functional programming principles with examples in Elixir
- actor model concurrency: why is it needed, advantages & disadvantages
- OTP patterns for handling agents: GenServer & Supervisor
Sponsors:
Evozon
Recognos

Sunday, August 7, 2016

Azure CDN – Things that are not available (yet)

Last week I had some interesting discussions around payload delivery and CDNs. I realized how easily people can misunderstand some features or functionality, thinking that a behavior is normal or making wrong assumptions.
In this context, I will write 3 posts about Azure CDNs, focusing on what is available, what we can do and what we cannot do.


I was involved in many discussions where people assumed that different features are available on Azure CDN and were ready to change their current architecture based on those wrong assumptions. Let’s take a look at some features and functionality that we don’t have on Azure CDN, but which many times we assume we have.

SAS Support for Blob Storage 
The most common functionality that people think is available on Azure CDN is Shared Access Signature support for Blob Storage. Shared Access Signature (SAS) is one of the most used and powerful features of Blob Storage. On Azure CDN we have the ability to cache a blob using the full URL, including the SAS query string.
Maybe because of this, people have the impression that Azure CDN will take the SAS validity into consideration and that, once the SAS expires, it will also invalidate the cache.
The reality is that, at this moment, Azure CDN treats a URL to an Azure Blob that has a SAS just like any other URL. If the content can be accessed, then it will copy the payload and replicate it to the CDN nodes. The content will be removed from the CDN nodes only in the moment when the TTL value on the CDN expires, which has no connection with the SAS.
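This behavior can be modeled in a few lines. The sketch below is not the real CDN internals, just an illustration under the assumptions above: the cache key is the full URL (SAS token included), and eviction is driven only by the CDN's TTL, never by the expiry embedded in the SAS token.

```python
import time

CDN_TTL_SECONDS = 7 * 24 * 3600  # an illustrative TTL, not an Azure default

cache = {}  # full URL (with SAS query string) -> (payload, cached_at)

def cdn_get(url: str, fetch_origin):
    """Serve from cache while the TTL holds; the SAS is opaque to the CDN."""
    now = time.time()
    if url in cache:
        payload, cached_at = cache[url]
        if now - cached_at < CDN_TTL_SECONDS:
            return payload           # served even if the SAS expired long ago
    payload = fetch_origin(url)      # the origin validates the SAS, the CDN doesn't
    cache[url] = (payload, now)
    return payload

# An expired-looking SAS (se=2016-08-10) makes no difference to the cache:
blob = "https://acc.blob.core.windows.net/c/file.bin?se=2016-08-10&sig=abc"
print(cdn_get(blob, lambda u: b"payload"))   # first request goes to the origin
print(cdn_get(blob, lambda u: b"changed"))   # still b'payload': served by TTL
```

A side effect worth noticing: because the token is part of the cache key, every distinct SAS for the same blob is cached as a separate object.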



HTTPS for Custom DNS Domains
Even if Azure CDN has support for custom DNS domains and also has support for HTTPS, it doesn’t mean that you can use HTTPS with your own domain name. It sounds strange, but it can be confusing when two features are supported individually while the combination of them is not.
This means that if you need HTTPS, then you will need to use the Azure CDN DNS name. The good news is that this is the most voted CDN feature on Azure UserVoice, so very soon we should have support for it.

HTTPS Client Certificates
When you are using Azure CDN it is important to remember that, even if there is support for HTTPS, there is no support at this moment for client certificates. You are allowed to use only the SSL certificates provided by the CDN.
This is why we don’t yet have support for client certificates on custom DNS domains for Azure CDN, but things should get much better very soon.

HTTP/2
The new protocol of the internet, if we can call it that, is not yet supported. In general, this is not a blocker, except if you have a complex web application where you want to minimize the number of requests made by the client browser.
For these situations, working with Azure CDN might not be so simple, but there are workarounds.

Fallback to custom URI
Some CDN providers give us the possibility to specify a URI that can be used as a fallback when the content is not found in the original location. It is useful when you are working with an application where availability is critical and you need to be able to provide a valid location for all content.
Of course, with a little imagination you can solve this problem pretty simply: putting Azure Traffic Manager between Azure CDN and your content would allow you to specify a fallback list of URIs.
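In the absence of a built-in fallback, the same effect can also be approximated on the client side. The sketch below is a hypothetical illustration (the URLs are placeholders, and `fake_fetch` stands in for an HTTP GET): try the CDN endpoint first, then walk an ordered list of fallback locations.

```python
def fetch_with_fallback(uris, fetch):
    """Return the first successful response; raise if every location fails."""
    last_error = None
    for uri in uris:
        try:
            return fetch(uri)
        except IOError as err:       # treat any fetch failure as "not found here"
            last_error = err
    raise last_error

locations = [
    "https://mycdn.azureedge.net/content/app.js",     # CDN endpoint, tried first
    "https://backup1.example.com/content/app.js",     # fallbacks, in order
    "https://backup2.example.com/content/app.js",
]

def fake_fetch(uri):                 # stand-in for a real HTTP GET
    if "azureedge" in uri:
        raise IOError("content not found on the CDN")
    return f"served from {uri}"

print(fetch_with_fallback(locations, fake_fetch))
# served from https://backup1.example.com/content/app.js
```

Server-side, a Traffic Manager profile with prioritized endpoints plays the same role without changing the clients.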

Access logs of CDNs are raw data
Many people are requesting access to raw log data, similar to the logs from Azure Storage. This might be a useful feature, but at the same time I have some concerns about it.
The main purpose of a CDN is to cache and serve content that is requested often, from a location as near as possible to the client. It is used for cases when the RPS (requests per second) is very high. Generating a file with raw data could be expensive, and you would end up with very large logs. Processing such files might be expensive as well.

It is important to remember that a part of these features are already planned by the Azure team, and in the near future we might find them in Azure CDN. Also, don’t forget that an out-of-the-box solution is hard to find, but with a little imagination we can find a workaround.

Wednesday, August 3, 2016

Azure CDN – Available features and functionality

Last week I had some interesting discussions around payload delivery and CDNs. I realized how easily people can misunderstand some features or functionality, thinking that a behavior is normal or making wrong assumptions.
In this context, I will write 3 posts about Azure CDNs, focusing on what is available, what we can do and what we cannot do.

Let’s start with the first topic and talk about the features and functionality that are available now (Q3 2016) on Azure CDN. You will find a lot of useful information on the Azure page, and I’m pretty sure that you have already checked it.
In comparison with the last few years, a big step was made by Microsoft: they signed partnerships with Akamai and Verizon (both among the biggest CDN providers in the world).

This was a smart choice, because developing and building your own CDN is not only expensive, but also requires a lot of effort. There are things that don’t make sense to build yourself unless you need something additional.
A nice thing here is that you don’t need a dedicated contract with each provider. You only need your Azure Subscription – no custom contract or invoice from Akamai or Verizon.

In general, the CDN features provided by the two are similar, with some small exceptions that might (or might not) impact your business. Based on the feature lists, we could say that Akamai is the more basic one and Verizon has some additional features, but both of them offer the base functionality that is required from a CDN.

The features that are available only on Azure CDN from Verizon are:
      1. Reports related to bandwidth, cache status, HTTP error codes and so on.
      2. Restricting access to the content based on country/area. Verizon allows you to specify a list of countries (DE, UK) or areas (EU, AP – Asia/Pacific) that can(not) access the content. The filtering is done based on the client IP and is applied per directory. It is good to remember that this rule applies recursively at folder level, there is no support for wildcards, and you can specify one rule per folder that can contain multiple countries (areas). Being a recursive rule, if you define a rule one level down, only the rule defined on the sub-folder will apply – a simple rule engine.
      3. Pre-loading content in the cache. Normally, content is loaded in the cache only when it is requested for the first time. Azure CDN from Verizon allows us to specify content that we want to pre-load. Don’t forget that you need to specify exactly the file that you want to pre-load, and you can pre-load a maximum of 10 files per minute (per profile). There is a kind of support for regular expressions, but each result from them needs to be a relative path to a file.

Now, let’s see the features that are supported by both Azure CDN from Akamai and Azure CDN from Verizon:
      1. HTTPS support
      2. DDOS protection
      3. Dual Stack support (IPv4 and IPv6)
      4. Query String Cache – controls how content is cached when the URL contains a query string rather than a static path. There are 3 options available: cache each unique URL (query string included), ignore query strings, or bypass caching for URLs that contain query strings. The last option is very useful.
      5. Fast purge – allows us to clean the cache even if the TTL (Time To Live) of the content hasn’t expired yet. The purge can be done from the Azure Portal, from PowerShell or through the REST API.
      6. REST API for CDN configuration (as for all other Azure Services)
      7. Custom domain name support
      8. Caching content from Azure Storage. Access to Azure Storage is done using the REST API. In this way we can cache blobs and even Azure Tables (as long as we know that it is static content).
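The three Query String Cache options from item 4 can be made concrete by modeling each mode as a different cache-key function. This is a sketch for illustration only; the mode names below are paraphrased, not the official portal or API labels.

```python
from urllib.parse import urlsplit

def cache_key(url: str, mode: str):
    """Derive the CDN cache key for a URL under each query-string mode."""
    parts = urlsplit(url)
    if mode == "ignore-query-string":      # same key with or without the query
        return parts.path
    if mode == "cache-every-unique-url":   # each query combination cached apart
        return parts.path + "?" + parts.query
    if mode == "bypass-caching":           # URLs with queries are never cached
        return None
    raise ValueError(f"unknown mode: {mode}")

hd = "https://cdn.example.com/video.mp4?quality=hd"
sd = "https://cdn.example.com/video.mp4?quality=sd"

# Ignoring the query string collapses both requests onto one cached object,
# while caching every unique URL keeps the two variants separate.
assert cache_key(hd, "ignore-query-string") == cache_key(sd, "ignore-query-string")
assert cache_key(hd, "cache-every-unique-url") != cache_key(sd, "cache-every-unique-url")
assert cache_key(hd, "bypass-caching") is None
```

Which mode is right depends on whether the query string actually changes the payload: for the `quality` parameter above, ignoring it would serve the wrong variant.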

Azure CDN from Verizon is available in two tiers, Standard and Premium. Of course, besides the price, there are features that are available only on Premium, like:
      1. Real-time CDN status – useful if you need real-time data related to bandwidth, number of connections, error codes and cache status. For a CDN this is normally nice to have, but you can do your job without this kind of information.
      2. Advanced reports – detailed geographical reports related to traffic, with different views per day, hour, file, file type, directory and so on.
      3. Customizable HTTP behavior based on your own rules

As we can see, there are a lot of features available for Azure CDN. For normal use cases, you can use it without a problem.

Thursday, July 21, 2016

Plan the project decommission in day 0

We all make mistakes. Sometimes they are small, with no impact on business continuity; sometimes they are bigger and can impact the business.
A few years ago I was indirectly involved in a project where a wrong decision not only caused a large amount of money to be lost, but also still has an impact on the current business, with a high risk of affecting business continuity.
I decided to write this post as a lesson learned for all of us, to try to avoid making the same mistake – especially in this era, where SaaS is the preferred option and buying licenses for an existing solution is better than developing your own – without analyzing the impact and what shall be done if you want to change the provider.

Overview
Don’t get me wrong. I’m a big fan of SaaS and of buying over in-house development. From my perspective, this is the right solution – for companies that have the right people around them. People that are aware of dependencies, that know the system will one day become legacy and will need to be replaced – in one word, people who are doing their job, not only following the KPIs.
There is a rule that says that a Manager, GM or CxO shall, from the first day, start to find and prepare the next person that will take their place. The same thing shall happen in the software industry, especially when we are talking about enterprise and IoT. In this world, replacing a component might have a high impact on business continuity, costs and legal aspects (licenses) after 5 or 10 years.

The story
The story happens inside a big enterprise that has been working with devices and IoT for more than 25 years. They are the kind of company that changes the world. Their solutions can be found around the globe and are used by millions of people every day.
Of course, especially in the enterprise, a device comes with a support and maintenance package. This means that telemetry data needs to be collected, updates need to be sent to devices and many more. In one word, it is IoT, in the way we know it today.
As with any software solution, you try to reuse it as much as possible. This means that you will try to use the same implementation for your whole device landscape. This is perfectly normal and natural. In the end, the flows are similar; usually only the source is different, or the information that you collect may vary slightly based on the device capabilities.
Being in the era of IoT, the normal thing to do is to look around you, identify the current solutions and try to find one that suits your needs. Once you identify a platform that fulfills your needs, you buy the license for it. And of course, you start to use it.
In general, such a platform comes with an out-of-the-box solution on the backend and another part that runs on your devices. On top of them you might need to do some customization and voilà, you have connected your devices to your backend system.



The out-of-the-box solution that comes with the IoT platform can come with different licensing models. The most aggressive one, in my vision, is the one where the agent that comes with the IoT platform and runs on the device is not your property, and you are allowed to use it only as long as you pay your annual license.
This is what happened in my story. It is one of the most evil dependencies that you can have.

Let me explain why.

As with any software solution, the system has a normal life-cycle. In one way or another, you know that this solution will not run forever and that in the end you will need to change it. The life-cycle could be 5 years, 10 years or even 15 years. This means that from the beginning you need to think about the dependencies that you’ll have at the moment when this period of time ends. You need to ensure that the migration to the new solution can be done with minimal impact. In this kind of situation, even the minimal impact will be a big one, but at least you want to try to avoid human intervention on each device.
The second thing is that if you are working in special industries, the law will not allow you to push just any software to the device. There is a standard validation process that can take from 6 months to even years. This means that for the devices that are not in this validation phase, injecting new software into them will add another 6 months to 2 years. It means that for this period of time you will not be able to push new types of devices to the market.
Another thing is the devices that are offline, but still running. Removing software from them will be hard. We don’t even want to talk about how you can remove this software from the backup images that were shipped with the devices and are used automatically when the software or OS that runs on the device crashes…
All these things happen because you have, on your own devices, produced by you, a software component that can be used only as long as you are the client of that IoT platform provider.
Once you are removed from the client list, you are required by the contract to remove all software that is their property.

The moral
It is acceptable to have this kind of software on systems where you have easy access. This kind of dependency can be pushed to cloud solutions and datacenters where you have access, but not to the field. The field is a territory where you don’t have access and control.
In the field, each device shall run software with a license that allows you to run it even after 50 years, without having to pay a fee. It is normal to pay a fee for a service; but for the agent, for that small piece of code, you should buy the right license. You don’t need to own it, but you need to be able to run it, or keep it installed on devices, for as long as you want.

A good example here is the Windows OS. Think about Windows 98 or Windows 7. You buy the license only once, but you can use it as much as you want. Other versions allow you to upgrade (receiving fixes and updates) as long as you pay the license; once you stop paying, you are still allowed to run it.
An example from the IoT world is the IoT Hub agent that is offered by Microsoft. It is open source, it is free and you can use it in any way you want. You can even modify it without problems. What do you pay for? The IoT Hub backend, as a service. But if you don’t use it anymore, nobody will ask you to remove the agent from your devices.

In conclusion, try to have a vision of the software life-cycle from the beginning. Even the best software will be replaced in the end.

Friday, July 15, 2016

Lesson learned after the Azure bill is 2 times more than normal

Small things make the difference - Azure Billing Alerts 

In the last 3 months, a part of the team was blocked with some urgent tasks. Because of this, they didn’t have time anymore to review the Azure service instances running under the development and testing subscription.
The rest of the team didn’t check the costs and/or review the services running on Azure. In the context where the people that used to check this were blocked with other tasks, this generated costs 2 times higher than normal – even if only half of the team was working on development in this time period.

Of course, the best solution would have been for the responsibility to be passed to another person who would check this, but it didn’t happen. An even better approach is to have a team that knows that running services on a cloud provider generates costs, and that reviews all the time what services they are using and turns off the ones that are not used.
But this didn’t happen, of course.

There are 3 lessons learned from this story:

  • Cultivate ‘responsibility’ inside the team. People should care about this all the time and do these checks pro-actively (for the resources that were created and used by them).
  • Identify and have, at all times, a person that is responsible for checking the current bill.
  • Use Azure Billing Alerts – they allow us to set an alert (email alert) that is sent automatically to one or two email addresses in the moment when the threshold is reached.

Azure Billing Alerts is a preview feature from Azure that allows us to send an email to a maximum of 2 custom email addresses when the bill reaches a specific value. At this moment this service is free and allows us to configure a maximum of 5 alerts.
We decided to configure 5 different alerts with different values. We set a custom message on each alert specifying that it is a real alert only if it is sent before a specific day of the month. For example, it is normal to reach $100 of consumption after 4 days, but if you already reach $100 after 1 day, then you might have a problem.
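The rule we put in each alert's custom message can be written down as a small check. This is only a sketch of our heuristic: the $25/day pace is derived from this post's "$100 after 4 days is normal" example, not from any Azure default.

```python
def is_real_alert(amount_reached: float, day_of_month: int,
                  expected_daily_burn: float = 25.0) -> bool:
    """Flag a billing alert as real when spend runs ahead of the expected pace."""
    expected_so_far = expected_daily_burn * day_of_month
    return amount_reached > expected_so_far

# $100 after 4 days matches the expected $25/day pace: normal consumption.
print(is_real_alert(100, 4))   # False
# $100 after only 1 day is four times the pace: something to investigate.
print(is_real_alert(100, 1))   # True
```

Since Azure Billing Alerts only compare the bill against a fixed threshold, encoding the "before which day is this abnormal" part in the alert message (or in a small script like this) is what turns a raw threshold into an actionable signal.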

What other ways to prevent this kind of situation do you see?