

Showing posts from September, 2015

ListBlobsSegmented - perfect for iterating over large containers - Azure Storage Blobs

In this post we will talk about the 'ListBlobsSegmented' command, which allows us to get blobs from a container. When is this command useful? 'ListBlobsSegmented' is used when we need to fetch the list of blobs that are under a container of Azure Storage. This command will not fetch the content of the blobs; only the blob metadata is fetched. Based on this information we can trigger a download if needed. An important thing is the number of blobs that are fetched per call: a single call retrieves the metadata of at most 5,000 blobs. If the container has more than 5,000 items, the response will also contain a BlobContinuationToken. This token can be used to fetch the next 5,000 blobs from the container. The size of the result segment is fixed; we cannot modify this value. Example: BlobResultSegment blobResultSegment = blobContainer.ListBlobsSegmented(new BlobContinuationToken()); while (blobResultSegment.Contin...
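To make the iteration pattern concrete, here is a minimal sketch of the full loop, assuming the classic Microsoft.WindowsAzure.Storage SDK; connectionString and the container name are placeholders. Passing null as the initial token starts from the beginning of the container:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString); // placeholder connection string
CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("mycontainer"); // hypothetical name

BlobContinuationToken token = null; // null = start from the beginning of the container
do
{
    BlobResultSegment segment = container.ListBlobsSegmented(token);
    foreach (IListBlobItem item in segment.Results)
    {
        // only blob metadata is available here; trigger a download explicitly if needed
    }
    token = segment.ContinuationToken; // null when there are no more blobs to fetch
} while (token != null);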

What to do when I receive a 502 error code on an Azure endpoint - HTTP Request failed. Error Code: 502.

From time to time we started to receive HTTP 502 error codes from an Azure Web App during some load tests. In this post we will talk a little about what can be the root cause of this error and how we can manage it. HTTP client exception: HTTP Request failed. Error Code: 502. This error does not appear very often, but it can be annoying, especially because its root cause cannot be traced easily. When you have an Azure Web App (Azure Web Site) or an Azure Web Role, this error is not returned by your application. In most cases it is returned by the Azure Load Balancer, which plays the role of ARR (Application Request Routing). When ARR doesn't receive a response from your application within 3 minutes (the default timeout for an Azure Web App), a 502 error is returned. For ARR this means that the system is not in good health; it could even be in a Pending state. The 3-minute timeout period is specific to Azure Web Apps (Azure Web Sites). Solutions: The first thing that you need to do is t...
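As a sketch of one common mitigation (an assumption on my side, not necessarily the fix the post goes on to describe): keep the HTTP response well under the ARR timeout by returning 202 Accepted immediately and doing the long-running work in the background. ReportQueue and ReportRequest are hypothetical names:

using System.Net;
using System.Web.Http;

public class ReportController : ApiController
{
    public IHttpActionResult Post()
    {
        // enqueue the long-running work (e.g. on an Azure Storage queue)
        // instead of blocking the HTTP request for minutes
        ReportQueue.Enqueue(new ReportRequest()); // hypothetical helper
        return StatusCode(HttpStatusCode.Accepted); // 202 - accepted, processed later
    }
}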

App Service Plan - Why it is important for Azure Apps

In this post we will talk about the App Service Plan that exists for Azure App Service. The main scope of this post is not to cover all the details, but to put on the table the small things that can make a difference. Do we have a service plan for Web/Worker Roles? No, the App Service Plan exists only for Azure App Services like Web Apps, API Apps, Logic Apps, Mobile Apps and so on. Why, when I increase the number of instances of a specific Web App, does the instance count of the rest of the Web Apps from the same service plan increase automatically? All resources are shared between all the applications from the same App Service Plan. This means that when you increase the number of instances, you will see this change on all Apps from the same App Service Plan. When I use the same App Service Plan, do multiple apps share the same physical resources? Yes. All Azure Apps under the same App Service Plan use the same resources. For example, if you have 3 Web Apps under the same App Service Plan,...

Why not to use Stopwatch when you need to measure the duration of an HTTP request in WebAPI

In this post we will talk about how we can measure how long it takes for an HTTP request to be executed on an ASP.NET MVC application. All the tests are done using a web site hosted on Microsoft Azure. The instance used for this purpose is Shared - F1. Let's assume that we have the following requirement: at the end of each HTTP request you need to add information about the request duration to the logs. The first solution that comes to mind is to use "HttpContext.Current.Timestamp" to calculate the duration of a request. In theory we could calculate the difference between "DateTime.Now" and the timestamp from "HttpContext". protected void Application_EndRequest() { Trace.WriteLine(string.Format("Request duration: {0}", (DateTime.Now - HttpContext.Current.Timestamp).TotalMilliseconds)); } As we can see in the above example, we added this logic in the "Global.asax" file, in the "Application_EndRequest...
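For contrast, the Stopwatch-based alternative that the title cautions against is typically wired up as a Web API message handler; a minimal sketch:

using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class TimingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Stopwatch stopwatch = Stopwatch.StartNew();
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        stopwatch.Stop();
        Trace.WriteLine(string.Format("Request duration: {0} ms", stopwatch.ElapsedMilliseconds));
        return response;
    }
}

// registration, e.g. in WebApiConfig.Register:
// config.MessageHandlers.Add(new TimingHandler());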

Task.Unwrap() - A useful proxy to avoid an inner Task inside a Task

In this post we will talk about Task and what we should do when we end up with 'Task<Task<Foo>>'. Let's start with a simple example. Let's assume that we have an async method. public async Task<int> DoSomethingAsync() { return await GetNumberAsync(); } We have the 'DoSomethingAsync' method that we need to call inside another task. If we call this method directly we will end up with Task<int>, but if we call this method in another Task then we will end up with... Task<int> simpleCall = DoSomethingAsync(); Task<Task<int>> complexCall = new Task<Task<int>>( async () => { return await DoSomethingAsync(); }); As we can see, to be able to call an async method in a task we need to add the 'async' keyword to the lambda expression (function). Because of this we will get a Task of Task (Task<Task<...>>) and not a simple Task<...>. You could say that this...
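A minimal sketch of how Unwrap() removes the nesting; GetNumberAsync is replaced here by a Task.Delay placeholder:

using System;
using System.Threading.Tasks;

class Program
{
    static async Task<int> DoSomethingAsync()
    {
        await Task.Delay(100); // placeholder for GetNumberAsync()
        return 42;
    }

    static void Main()
    {
        // starting a method that returns Task<int> inside a task yields a nested Task<Task<int>>
        Task<Task<int>> nested = Task.Factory.StartNew(() => DoSomethingAsync());

        // Unwrap() returns a proxy Task<int> that completes when the inner task completes
        Task<int> proxy = nested.Unwrap();

        Console.WriteLine(proxy.Result); // 42
    }
}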

Azure Service Bus Premium - Better latency and throughput with predictability

A few days ago the public preview of Azure Service Bus Premium was announced. The first two things that people usually check for a new service like this are the price and the added value. In this post we will talk about both. Added Value: The biggest difference between the standard offer and Premium is the dedicated VMs that are reserved only for the end customer. Each VM that is used by Azure Service Bus Premium is isolated from the rest of the customers and will be used only by that customer. This means that the performance of Azure Service Bus will be predictable. This is a very important quality attribute, especially when you need to know exactly how long it takes for a message to arrive at its final destination, when latency and throughput need to be predictable. When you decide to use the Premium offer, the storage engine that is used behind the scenes is not the standard one used by Service Bus, but the new one used by Azure Event Hubs - the so-called Jet Stream. In this way, we can have ded...

Azure Storage - Client Side Encryption

A few days ago, client-side encryption for Azure Storage was announced. In this post we will take a look at this feature. First of all, you should know that the encryption/decryption takes place on the client side. This means that the content is already encrypted when it arrives in Azure. This encryption technique is called the envelope technique. It is very useful when you want to add another security layer over your data. Out of the box, there is a client library for .NET (including Windows Phone). Other languages, like Java, are not yet supported, but because the encryption algorithm is a well-known one, you may be able to implement it on other platforms as well. The encryption algorithm that is used by the client library is AES (Advanced Encryption Standard). It is important to know that the encryption keys are generated by the client library and are NEVER stored in Azure Storage. The encryption key should be stored in a different location. This library is fully integrated with K...
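A minimal sketch of an encrypted upload, assuming the Azure Storage .NET client together with the Key Vault extensions package; the key identifier, container name, and connection string are hypothetical:

using Microsoft.Azure.KeyVault;            // RsaKey comes from the Key Vault extensions package
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// the key lives on the client; only the wrapped (encrypted) content key travels with the blob
RsaKey key = new RsaKey("private:key1"); // hypothetical key identifier

BlobEncryptionPolicy policy = new BlobEncryptionPolicy(key, null);
BlobRequestOptions options = new BlobRequestOptions { EncryptionPolicy = policy };

CloudBlobContainer container = CloudStorageAccount.Parse(connectionString) // placeholder
    .CreateCloudBlobClient().GetContainerReference("secrets"); // hypothetical container
CloudBlockBlob blob = container.GetBlockBlobReference("data.txt");

// the content is AES-encrypted on the client before it leaves the machine
blob.UploadText("sensitive data", null, null, options);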

Azure Service Bus - How to extend the lock of a message | RenewLock

In this post we will discuss Azure Service Bus Topics and Queues, with a special focus on the Peek and Lock feature. Introduction: Azure Service Bus is a messaging system that allows us to send messages between different systems in a reliable and easy way. A lot of concepts from ESB are implemented by Service Bus, allowing us to do magic stuff with messages. There are two ways to consume messages from Service Bus:
- Peek and Lock - locks a message for a specific time interval and notifies Service Bus when we want to mark the message as processed (removed from Service Bus)
- Receive and Delete - once a message is received from Service Bus, it is also deleted automatically from the messaging system
Peek and Lock: When using Peek and Lock, by default we lock the message for 60 seconds. This means that in this time interval the message is not available/visible to other consumers. Once we process the message, we can mark it as processed. If we don't mark the message as processe...
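A minimal sketch of the Peek and Lock flow, including RenewLock, assuming the Microsoft.ServiceBus.Messaging client; the connection string and queue name are placeholders:

using System;
using Microsoft.ServiceBus.Messaging;

QueueClient client = QueueClient.CreateFromConnectionString(connectionString, "myqueue"); // placeholders

BrokeredMessage message = client.Receive(); // PeekLock is the default receive mode
try
{
    // if processing may exceed the lock duration (60 seconds by default),
    // extend the lock so the message stays invisible to other consumers
    message.RenewLock();

    // ... long-running processing ...

    message.Complete(); // mark as processed - removed from Service Bus
}
catch (Exception)
{
    message.Abandon(); // release the lock - the message becomes visible again
}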

SQL Azure - Improve the restore time of a BACPAC during load tests

SQL Azure is a great service when you need a location to store a relational database and you don't want to manage the infrastructure that is behind it. In only a few minutes, you can get a powerful database ready for your needs. In almost all project life cycles, there is a moment in time when you need to run one or more load tests. For each scenario, you may need a different database setup. Assume that we have 3 different scenarios that we want to test. This means that we need to load 3 different bacpac files. If the database is relatively small (10MB) then we will not have any kind of problem. But with a large database, the restore process can take some time. Behind the scenes, a database restore is a complex and CPU-intensive process, so it will require time and compute power. This will not be a problem for a database of type P1 or P2, where the restore process is fast. But for an S0 or S1, we may need to wait even a few hours until our backup is resto...
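One way to script the idea (a sketch under stated assumptions: the DacFx client library, hypothetical database and file names): create the target database at a high tier so the import gets more CPU, import the bacpac, then scale back down:

using System.Data.SqlClient;
using Microsoft.SqlServer.Dac; // DacFx NuGet package

// 1. create the target database at a high tier (run against the logical server's master database)
using (SqlConnection conn = new SqlConnection(masterConnectionString)) // placeholder
using (SqlCommand cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = "CREATE DATABASE [LoadTestDb] (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P1')";
    cmd.ExecuteNonQuery();
    // note: provisioning is asynchronous; wait until the database is online before importing
}

// 2. import the bacpac for the current load test scenario into the empty database
DacServices services = new DacServices(targetConnectionString); // placeholder
using (BacPackage package = BacPackage.Load(@"C:\loadtests\scenario1.bacpac")) // hypothetical path
{
    services.ImportBacpac(package, "LoadTestDb");
}

// 3. once the import completes, scale back down to the tier used during the test, e.g.:
// ALTER DATABASE [LoadTestDb] MODIFY (SERVICE_OBJECTIVE = 'S0')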