

Showing posts from 2018

Tampering a flight reservation

In this post, I would like to tackle a day-to-day problem of airplane travelers – booking security. Prologue The story began a few days ago, when my friend's flight reservation was canceled by the agency that had bought the tickets. The request was made to the agency because my friend was too lazy to access the airline website and cancel the ticket himself. The question that popped up in my mind was: who are the entities and groups that have the right to cancel or make changes to an airline reservation? I started to research this topic, to find out what tools and mechanisms can ensure that our reservation is protected from 3rd parties that could change our ticket without our approval. Research I started from the ground up, using an existing reservation that I had for a business trip to the US. Each airline reservation has 3 pieces of information that enable you to manage your booking: (1) Name – the traveler's name; (2) Booking reference – an alpha-numeric…

(Post Event) Global AI/ML Bootcamp, Cluj-Napoca, 2018

This week we hosted the Global AI/ML Bootcamp event in Cluj-Napoca. We had a day full of hands-on labs on Cognitive Services and the Bot Framework. It was a lot of fun to compare Trump's face with our own using the Vision services offered by Microsoft through Cognitive Services. More than 30 people joined the event and discovered how Azure can improve our day-to-day life using AI and ML. The event was made possible with the support of our local sponsor 8X8, which hosted it. At this event, I delivered a workshop about Cognitive Services. Together with the attendees, we developed an application that is able to detect people in pictures. On top of this, the application is able to identify images in which the same person appears. You can find below slides and content from the event. Title: Building a Computer Vision Application Leveraging the Custom Vision Service Trainer: Radu Vunvulea Slides: Content: Title: How to design, bui…

(Auto) Scaling dimensions inside Azure Kubernetes Service (AKS)

Azure Kubernetes Service (AKS) is the perfect place to deploy and run your Kubernetes system. You just deploy your application, focusing on business needs and letting Microsoft manage your infrastructure. Once your application is a big success and you have a lot of visitors or clients, you will need to scale up or down based on different factors, like the number of users, the number of views, or many other specific counters. Let's see how you can scale your Kubernetes solution inside AKS and what Microsoft has prepared for us to support dynamic scaling. One of the most appealing features of a microservice architecture is scaling, which is more granular than in a classical deployment, giving us the flexibility to scale only the components that are under stress. In this way, we can optimize how we consume our computation resources. Scaling dimensions Inside Kubernetes, there are different directions in which we can scale our applications. Two of them are more on the softwa…
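On the pod dimension, the standard Kubernetes mechanism is the Horizontal Pod Autoscaler, which AKS supports out of the box. A minimal sketch of such a manifest, scaling on CPU utilization (the deployment name, replica limits, and threshold below are illustrative, not from the post):

```yaml
# Horizontal Pod Autoscaler: scales the pods of a deployment between
# minReplicas and maxReplicas, targeting ~70% average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp        # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

The other dimension – scaling the nodes themselves – is handled at the AKS/VM level rather than in the manifest.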

[Post Event] Codecamp Cluj-Napoca, 2018

This weekend I participated at Codecamp Cluj, and I was impressed by the size of the event. More than 1,000 people and 10 tracks in parallel. This free event is now at another level in terms of size and number of participants. There were more than 50 speakers that delivered quality sessions about IT trends. From AI to IoT, and from software processes to Java and .NET, we had them all in Cluj-Napoca for one day. At this event I had the opportunity to deliver two sessions, about Azure Kubernetes Service and about what quantum computers are preparing for developers. You can find below slides and abstracts. Title: Demystifying microservices inside Azure AKS Abstract: The main purpose of this session is to put on the table the things that we need to be aware of when we decide to run our .NET Core applications inside Azure Kubernetes Service. Microservices are powerful but come with a new paradigm for how we need to design our applications and write our code. Join this session if you w…

[Post Event] ITDays 2018

Last week I had the great opportunity to participate at the ITDays conference. With more than 900 participants registered and 4 stages, ITDays has become one of the most relevant conferences in Cluj-Napoca taking place in the second half of the year. A lot of topics and areas were discussed, from automotive and blockchain to AR and SAP. This two-day conference was at its 6th edition and is becoming bigger every year. I delivered a session about Azure Kubernetes Service (AKS) and the tips and tricks that we need to be aware of when we start this journey. I talked about important things that you need to know about scalability, load balancing, and the tooling around AKS. You can find below the slides from my session. Topic: Demystifying microservices inside Azure AKS Abstract: The main purpose of this session is to put on the table the things that we need to be aware of when we decide to run our .NET Core applications inside Azure Kubernetes Service. Microservices are…

[Post-Event] Autumn ITCamp Community Event in Cluj-Napoca | November 7th, 2018, Romania

This week, together with the ITCamp Community, we organized a community event in Cluj-Napoca. There were two sessions, where we talked about scraping the web and how we can automate a microservices solution on top of Kubernetes using Azure DevOps. More than 55 people attended the event, curious to find out more about best practices and lessons learned from writing a crawler from scratch. Nowadays the limitations are not on the technology side but on the legal side. Together with the attendees, we discovered how we can create a Kubernetes cluster at runtime for our environments without having to write or modify a line of code – Azure DevOps is here to make our life easier. The event was supported by UiPath, which provided us with a fantastic location and tasty snacks. You can find below the slide deck from the event and some photos. See you next time! Scraping the web for useful data: In present times, we are able to find data everywhere with a simple search…

Features branching strategy using Git and Azure DevOps

Let’s talk about different branching strategies that can be used with Git. Let’s assume that you are working on a project where, based on what clients need (and pay for), you must be able to provide a specific list of features. Each client can decide which features they want to buy and use. The code is hosted inside Azure DevOps, where you have a private Git repository. The solution is designed in such a way that you have a CORE layer that is common to all features. Because of the project’s complexity and how the solution is designed, you cannot isolate each feature in a separate location. It means that the implementation of each feature can be found across the projects, in 2-3 different locations. A good approach is to create a branch for your CORE layer. Once this layer is stable, you can create a new branch from the CORE branch for each feature. The most significant advantage of this approach is the ability to merge different features at the moment when…
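The approach described above can be sketched with plain Git commands (all branch and feature names below are illustrative, not from the post):

```shell
# 1. Create the CORE branch from the current main line
git checkout -b core

# 2. Cut one branch per sellable feature from the stable CORE branch
git checkout -b feature/reporting core
git checkout -b feature/invoicing core

# 3. Assemble a client-specific branch by merging only the features
#    that this client bought
git checkout -b client/contoso core
git merge --no-ff feature/reporting
git merge --no-ff feature/invoicing
```

CORE fixes flow down by merging `core` into each feature branch; a client build is just a merge of the purchased feature branches on top of CORE.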

HTTP 409 code when you use CreateIfNotExists with Azure Tables

A few days ago I took a look over the Application Insights information that I have on one of my websites, hosted as a Web App inside Microsoft Azure. In the last weeks I’ve made some improvements, and in theory, I should not have any errors reported by the Web App. Taking a look at Application Insights, I noticed a 409 error that was repeating pretty often. It seems that the HTTP error is thrown each time a new user session is initiated and some information is read from the Azure Table. POST azurestorageaccountname/Tables 409 The error was pretty odd initially, mainly because from the user flow and functionality point of view, all things were on the right track. No errors were reported in the UI, and the application behaved as expected. The 409 error is reported at the moment when an entity already exists, for example when an Azure Table already exists. Looking closer at the call stack, I noticed that the error happens each time the ‘CreateIfNotExists’ method on the table w…
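The pattern behind ‘CreateIfNotExists’ can be illustrated with a small, generic sketch (this is not the Azure SDK source, just the idea; the names are hypothetical): the create request is issued, and a 409 Conflict answer is swallowed and treated as “already exists”.

```python
# Generic sketch of the "create if not exists" pattern. An HTTP 409
# (Conflict) returned by the create call means the resource is already
# there, so the error is swallowed and treated as success.
# HttpConflictError is a hypothetical exception for illustration.

class HttpConflictError(Exception):
    """Raised by a hypothetical client when the server answers 409."""
    status_code = 409

def create_if_not_exists(create_resource):
    """Call create_resource(); return True if created, False if it existed."""
    try:
        create_resource()
        return True
    except HttpConflictError:
        # The table already exists: expected, not a failure - but the
        # 409 round-trip still happened, which is why monitoring tools
        # such as Application Insights record it as an error.
        return False
```

The key observation is that the HTTP round-trip (and the 409) happens on every call, so invoking the check once at application start-up, instead of on every user session, removes the noise.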

IoT Home Automation | Alarm System Integration

In the last post related to our home automation and IoT solution, we talked about a simple watering system. In this post, we talk about how you can integrate the alarm security system, and the approach that I used in the end. From the hardware perspective, I have an alarm system based on a Paradox Magellan MG5050 with a K35 keyboard and REM15 remote controls. On top of this, I have an additional module for messages – a PCS250. Mission To connect the alarm system to an ESP8266 in such a way that I could arm and disarm the house using the ESP8266. First approach – Zone 8 To be able to control the Paradox board from an external device, you could, in theory, define a zone that can be used to arm or disarm the system. It can be configured using zone 8 (Z8) in combination with the tamper feature. Because I’m not a power user of Paradox systems, I was not able to configure it, just triggering the alarm system for…

Azure DevOps - Spin-up your software development pipeline

In the last weeks, Microsoft announced a new cloud service available to all – Azure DevOps. The main scope of this post is to identify the core features of this new service and the differences between Visual Studio Team Services and Azure DevOps. What? Azure DevOps is a collection of services used during the software development phase. All these services are interconnected and can be orchestrated from one location. There is full support for services like: Task Management and Tracking (Agile boards, including Kanban) Source Control (Git) Build and Release for CI/CD Testing (load and performance testing, manual or exploratory testing) Wiki Reporting and custom dashboard capability CI/CD pipelines Extensions for integration with 3rd parties Services Overview Some services are similar, from the functionality perspective, to the previous service called VSTS. This is normal, because Microsoft replaced the original service (VSTS) with Azure DevOps t…
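For the CI/CD pipelines part, Azure DevOps lets you describe the build as YAML checked in next to the code. A minimal sketch for a .NET Core project (the VM image and build steps below are illustrative assumptions, not from the post):

```yaml
# azure-pipelines.yml - minimal illustrative CI definition
trigger:
- master            # build on every push to master

pool:
  vmImage: 'ubuntu-16.04'   # hosted build agent image (illustrative)

steps:
- script: dotnet build --configuration Release
  displayName: 'Build the solution'
- script: dotnet test --configuration Release
  displayName: 'Run the tests'
```

The same pipeline can later be extended with release stages, so build and deployment live in one orchestrated definition.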

Immutable blobs inside Azure Storage (WORM)

Did you ever have a requirement specifying that once a document is written to storage, nobody shall be able to modify or delete it, including the administrators? OR Do you have a special business requirement specifying that audit data can be written only once and nobody shall be able to alter it? WORM This kind of business requirement is called WORM (Write Once, Read Many) and is common in industries like finance or healthcare. When you had such a requirement on top of Azure, there were not too many options until now. What did we have until now? With the available features of Azure Storage, you could define groups of users that are allowed to do only read operations. The same thing can be accomplished if you wrap everything behind an API (e.g., a REST API) that manages whether an operation is allowed or not. In both scenarios, you still have some users or access keys that can do delete or update actions. From this perspective, it is impossible to guarantee or offer an SLA wher…

How to calculate the cost of CI/CD on top of Azure VSTS

In this post, we tackle the cost estimation of a CI/CD system hosted on top of Visual Studio Team Services (VSTS). Context We are starting from an on-premises system that has a CI/CD pipeline built on top of Jenkins, which deploys the web application to a local IIS. Besides the web application, we have for each project a SQL Server database that can reach up to 100GB. The source control repository is GitHub, and for task management and tracking we have JIRA. In the current environment, we have around 20 projects that need such a migration, but the cost estimation is done per project, allowing us to have a better forecast for each one. We don’t take into consideration any other dependencies that a specific project could have, like Windows Services. Most of the clients are already using Azure to host the production environments for their web applications. Going into the cloud is more than acceptable for them. Besides this, remember that you have the code inside GitHub. In som…
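Once the cost drivers per project are identified (users, hosted build pipelines, database storage), the estimate itself is simple arithmetic. A hedged sketch – every price below is a made-up placeholder, not a VSTS list price, so always plug in the values from the official pricing page:

```python
# Back-of-the-envelope monthly cost per project for a CI/CD migration.
# All unit prices are ILLUSTRATIVE placeholders, not real Azure prices.

def monthly_cost(users, hosted_pipelines, sql_storage_gb,
                 price_per_user=6.0,       # assumed $/user beyond free tier
                 free_users=5,             # assumed free-tier size
                 price_per_pipeline=40.0,  # assumed $/extra hosted pipeline
                 price_per_gb=0.12):       # assumed $/GB for database storage
    paid_users = max(0, users - free_users)
    return (paid_users * price_per_user
            + hosted_pipelines * price_per_pipeline
            + sql_storage_gb * price_per_gb)

# e.g. monthly_cost(users=12, hosted_pipelines=1, sql_storage_gb=100)
# -> 94.0 with the placeholder prices above
```

Multiplying such a per-project figure across the ~20 projects gives the overall forecast, which is exactly why estimating per project pays off.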

Using an external email provider with Episerver DXC environment inside Azure

In this post, we talk about the different options that we have when we need to integrate an email service with an eCommerce application developed using Episerver and hosted inside DXC. As a side note, DXC is a SaaS offering on top of Microsoft Azure, where clients can host their web applications developed on top of Episerver. More about the DXC environment will be covered in another post. What do we want to achieve? We want to integrate an external email service provider, to be able to send emails to our clients for marketing purposes. Our web application is already hosted inside DXC, and even if we could write custom code that runs inside DXC to communicate with the email service provider, there are some limitations that we need to be aware of. The authentication and authorization mechanism offered by the email service provider is based on IP whitelisting. Only the IPs from that whitelist are allowed to make calls to the service and send emails. Limitations At this mom…

Data moving to/from Azure Storage using Azure Data Factory and Copy Activity

This post covers how we can use Azure Data Factory to copy our data from one location to another. Even if the copy activity might sound like a basic task, when you need to move 10 TB or 50 TB of data from one storage to another, things become interesting. Reliability, resume-and-continue, task parallelization, and data consistency become essential for your project. There are multiple solutions on the market that help you move data to and from Azure, or between different Azure locations. Today we will focus on Azure Data Factory and how it can help us do our job better. Azure Data Factory is a service created to help us work with data when we need to store, process, and move it, without having to think about anything else. Inside Azure Data Factory, we have an activity called the Copy Activity that can be used to move (copy) data from one source to another. When we talk about Azure Data Factory, we call: Source – the repositor…
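A Copy Activity is defined declaratively inside a pipeline. A trimmed sketch of what the JSON can look like for a blob-to-blob copy (the dataset names and parallelism value are illustrative, and the exact schema should be verified against the Data Factory documentation):

```json
{
  "name": "CopyBlobToBlob",
  "type": "Copy",
  "inputs":  [ { "referenceName": "SourceBlobDataset", "type": "DatasetReference" } ],
  "outputs": [ { "referenceName": "SinkBlobDataset",   "type": "DatasetReference" } ],
  "typeProperties": {
    "source": { "type": "BlobSource" },
    "sink":   { "type": "BlobSink" },
    "parallelCopies": 8
  }
}
```

The `parallelCopies` knob is one of the settings that matters at the 10-50 TB scale, alongside the service's built-in retry and resume behavior.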

Azure Design patterns: Self-Healing & Transient fault handling

I will start a series of posts about the core design patterns that you need to take into consideration when you start to work with Microsoft Azure and the cloud in general. These principles are important in a cloud application, even if most of them are known from classical on-premises development. Even so, we don’t apply them to on-premises systems all the time. In a cloud environment, it is possible, for example, to have a 10 ms connectivity problem with another service that you are using. It means that you need a retry policy in place that would automatically try to reconnect to the service. Cases like this need to be covered by your application running in the cloud. Keep in mind that all of this applies to lift-and-shift cases, too. Self-Healing & Transient fault handling Today’s post is dedicated to the self-healing topic. This is not science fiction, and you don’t need to be Netflix to have a self-healing…
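A retry policy with exponential backoff is the classic answer to such transient faults. A minimal sketch in Python (the exception types, attempt counts, and delays are illustrative; for .NET applications you would typically reach for an existing library such as Polly rather than rolling your own):

```python
import time

def with_retry(operation, max_attempts=3, base_delay=0.1,
               transient=(ConnectionError, TimeoutError)):
    """Retry `operation` on transient faults with exponential backoff.

    Delays grow as base_delay, 2*base_delay, 4*base_delay, ...
    Non-transient exceptions propagate immediately.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except transient:
            if attempt == max_attempts:
                # The fault did not go away: surface it to the caller.
                raise
            time.sleep(base_delay * (2 ** (attempt - 1)))
```

The backoff matters: a 10 ms blip is absorbed by the first retry, while a real outage is not hammered with immediate re-connect attempts.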