
Posts

Showing posts from May, 2015

Scaling Units - Having resources outside instances of Scaling Units

This year I have had several posts where I talked about Scaling Units and different approaches to implementing them. In this post we will discuss why it is important not to have resources shared across different scaling units (like storage, databases or messaging systems). Before going further, let's see why we are using Scaling Units. We use Scaling Units to be able to ensure the same quality attributes of a system for 10, 10,000 or 1,000,000 users. This is the most important thing that a Scaling Unit offers us. More about Scaling Units can be found here: http://vunvulearadu.blogspot.com/2015/02/scale-units-and-cloud.html A Scaling Unit contains resources that are dedicated to a fixed number of users or devices. If the maximum load is exceeded, a new Scaling Unit is added. Internally, we will never add more resources to a Scaling Unit to make it handle a higher number of users or devices. Of course, when we implement such a system, we will identify resources…
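To make the rule concrete, here is a minimal sketch of a fixed-capacity allocator; every name in it (ScalingUnit, MaxUsersPerUnit, ScalingUnitAllocator) is hypothetical and only illustrates the principle, not a real implementation:

using System.Collections.Generic;

public class ScalingUnit
{
    // The capacity of a unit is fixed; we never raise it to absorb more load.
    public const int MaxUsersPerUnit = 10000;
    public int AssignedUsers { get; private set; }

    public bool TryAssign()
    {
        if (AssignedUsers >= MaxUsersPerUnit) { return false; }
        AssignedUsers++;
        return true;
    }
}

public class ScalingUnitAllocator
{
    private readonly List<ScalingUnit> units = new List<ScalingUnit>();

    public ScalingUnit Assign()
    {
        // Reuse an existing unit with free capacity; a full unit is never resized.
        foreach (ScalingUnit unit in units)
        {
            if (unit.TryAssign()) { return unit; }
        }

        // All existing units are at maximum load, so a new Scaling Unit is added.
        ScalingUnit fresh = new ScalingUnit();
        fresh.TryAssign();
        units.Add(fresh);
        return fresh;
    }
}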

Who manages what in the cloud (IaaS, PaaS, SaaS)

I expect that all of us have heard about IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service). Nothing special here, especially in a cloud environment. Yet pretty often I discover that the responsibilities of the cloud provider and of the customer are not very clear. Generally, we expect to have minimal responsibilities when we are using SaaS. At the same time, when we are using IaaS we expect to have almost the same control as for On-Premises. But this is not always true. Let's take the main components of an environment and see who is responsible for each of them in Azure - the customer or Microsoft Azure. Network - The responsibility for this component is 100% on Microsoft Azure in all service models (IaaS, PaaS, SaaS). Azure offers and manages the network where our system lives. Storage - Similar to Network, this is a component that is managed fully by Azure. From blob storage to OS images and VM disks, the cloud providers need…

Get the list of NuGet packages that are used by a solution (project)

There are times when you need to do odd things. For example, when you develop a system you need to track the external libraries that you use. Why? From a legal perspective, the license of each library needs to be checked and validated with the legal department and with the end client. And now, how can we get a list of all the NuGet packages that we are using in a solution? The solution is simple and can be done directly from the Package Manager Console (Tools > NuGet Package Manager > Package Manager Console): Get-Package The output of this command will be the list of packages that are used by the open solution. You can filter the result to get the packages that are used only by a specific project, but we didn't need something like this.
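For those who do need the per-project view, the Package Manager Console supports it directly; a minimal sketch, where MyProject is a placeholder project name:

    Get-Package                          # all packages installed in the open solution
    Get-Package -ProjectName MyProject   # only the packages installed in one project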

[Post Event] ITCamp 2015, Cluj-Napoca

Here we are again, at the 5th edition of ITCamp, organized in Cluj-Napoca, Romania. There were two days full of new and interesting content. This year a lot of topics were discussed, from technical ones to business ones. We saw not only Azure, but also AWS, JavaScript, Node.JS and, yes, the new C# 6.0. There were more than 500 people that attended this event. With 4 tracks in parallel, it was pretty hard to decide which session to join. As usual, the list of speakers was attractive. Speakers like Paula Januszkiewicz, Andy Malone, Daniel Petri, Andy Cross, David Guard or Peter Leeson had great and interesting sessions. The list of speakers is long; I invite you to check the following link: http://itcamp.ro/speakers.html On top of this, we had two panels - a security one and a cloud one. Great and challenging questions were put during these panels. See you next year! I had the opportunity to have a 60-minute session where I talked about how we can scale above…

[Post Event] DevTalks 2015, Cluj-Napoca

Today I had the opportunity to be one of the speakers at the DevTalks conference. The conference was pretty interesting, with 4 different tracks in parallel (Web, Mobile and IoT, Cloud, and BI/Big Data). From this perspective, attendees had a lot of options. The thing that I liked at this conference was the range of subjects that were presented. It was not a conference about only one specific stack (only Java or only Microsoft). Because of this, I decided to talk about cloud and not to take only Azure into consideration. I analysed all the cloud providers that are on the market and tried to expose what people should take into consideration when they are moving from on-premises to cloud. Based on the DevTalks website, it seems that there were: 416 attendees, 44 speakers, 40 presentations and 4 event stages (tracks). There are a lot of "4"s in these statistics. I hope that next year we will see "5"s in their statistics. Below you can find the slides and abstract of my presentation. Title: …

Updating content of messages from Azure Queues

In the era of microservices and cloud, applications contain more and more components and sub-components that are designed to do only one thing. All these components need to communicate with each other in a fast and reliable way. For this purpose, different messaging solutions are used. Nowadays, an Enterprise Service Bus solution like Azure Service Bus Topics or BizTalk is normal. For simpler problems, one or more queues can be enough (Azure Storage Queues). Almost all messaging systems allow us to consume messages using the Peek and Lock pattern. This allows us to take a message from the queue, lock it for a specific time, process it and, at the end, if we were able to process it with success, remove it from the queue. During this time, the message is hidden from other consumers and cannot be peeked by them. If we don't mark the message as consumed within that time interval, the message becomes available in the queue again for consumption. But what should we do when we need to update the content of a message?
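For Azure Storage Queues the SDK covers this case directly with UpdateMessage. A minimal sketch, assuming the Microsoft.WindowsAzure.Storage client library; the queue name "orders" and connectionString are placeholders:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("orders");

// Peek and Lock: the message stays hidden from other consumers for 5 minutes.
CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));

// Change the content and renew the lock in a single round trip.
message.SetMessageContent("new content");
queue.UpdateMessage(
    message,
    TimeSpan.FromMinutes(5),
    MessageUpdateFields.Content | MessageUpdateFields.Visibility);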

Coding Stories - Optimize calls to Azure Table

This week I had the opportunity to do a code review for a startup. They decided to go with Azure from the first moment. The business logic behind the application is not very complex, but they use Azure Storage pretty heavily, especially blobs and tables. After a load test, they observed that there are some performance problems on the storage side. The cause of this problem was not Azure; the code that was written didn't use the Azure SDK properly. Below you can find two causes of these problems. CreateIfNotExists This method is used when we don't know if a specific resource (like a blob container or a table) exists. CloudTable fooTable = tableClient.GetTableReference("foo"); fooTable.CreateIfNotExists(); … There is no problem with this method as long as we don't make too many calls to it. Each time we call it, an HTTPS request is sent to the Azure backend to check if that specific resource exists or not (if the resource doesn't exist, then it will be created a…
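The usual fix is to pay the existence-check round trip once, at startup, and reuse the table reference afterwards. A minimal sketch, assuming the Microsoft.WindowsAzure.Storage SDK; the repository class and the table name "foo" are placeholders:

using Microsoft.WindowsAzure.Storage.Table;

public class FooRepository
{
    private readonly CloudTable fooTable;

    public FooRepository(CloudTableClient tableClient)
    {
        // Resolve the reference and call CreateIfNotExists exactly once,
        // when the repository is constructed, not on every request.
        fooTable = tableClient.GetTableReference("foo");
        fooTable.CreateIfNotExists();
    }

    // All subsequent operations reuse fooTable and never trigger
    // another existence check against the Azure backend.
}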

Should we trust a service for our communication channel, or should we use our own channel and protocol?

Nowadays there are a lot of mechanisms that can be used to connect devices from the field to our own system and infrastructure. If we look at the services that are offered by cloud providers, we will discover that each cloud provider has one or more solutions that can be used as a messaging or transport platform between our devices and backend. For example, if we take Microsoft Azure, we have Storage Queues and the Service Bus ‘Suite’ (which contains Topics, Queues, Relay and Event Hubs). Looking at an offering like this, we ask ourselves which service should be used. In the end, all of these services are extraordinary and can bring value to our solution. In this post we will debate a simple question that I have heard a lot of times in the last 2 years, in different meetings and/or projects: Should we trust a service for our communication channel or should we use our own channel and protocol? Before jumping into the discussion, let’s elaborate the question a little more. The communication b…