
Posts

Showing posts from February, 2013

log4net - Some fun with appenders

One of the most frequently used logging frameworks these days is log4net, an open-source framework that exists for different programming languages, like Java and C#. I have played with log4net lately to see whether it can become a bottleneck in an application. What I have observed so far is that the call to the appender is the longest call. Even though C# supports async calls to IO resources, log4net makes all these calls synchronously. Usually, if we have around 100 messages per second, the default log4net mechanism will be perfect for us. If you want to improve performance in such cases, you should use the TraceAppender. Yes, the default appender from .NET. It works great and is pretty fast, and it is a good option if you don’t want to use a buffering appender. A lot of frameworks use Trace, so don’t be afraid of using it. Another option is a buffering appender, an appender that does not send messages one by one but buffers them and flushes them in batches…
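As a rough illustration, here is a minimal sketch of wiring the fast TraceAppender behind a BufferingForwardingAppender in code (configuration is more commonly done in XML; the pattern string and buffer size below are my own choices, not from the post):

```csharp
using log4net.Appender;
using log4net.Config;
using log4net.Layout;

static class LoggingSetup
{
    public static void Configure()
    {
        var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
        layout.ActivateOptions();

        // The fast .NET Trace-based appender mentioned above.
        var trace = new TraceAppender { Layout = layout };
        trace.ActivateOptions();

        // Hold up to 100 events in memory and forward them in one batch,
        // so the slow appender call is paid once per batch instead of per message.
        var buffered = new BufferingForwardingAppender { BufferSize = 100 };
        buffered.AddAppender(trace);
        buffered.ActivateOptions();

        BasicConfigurator.Configure(buffered);
    }
}
```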

Converters and IoC containers

Nowadays, using an IoC container is a must for any kind of project. Even in small and simple projects, people have started to use an IoC. It is very hard for me to understand why you would use an IoC for every kind of project, because you raise the complexity without needing it. Also, people add all their objects to the IoC without asking themselves whether those objects really belong there. What do you think about converters added to the IoC? Very often people register all their converters, even converters used in only one class. In a big project we can end up with hundreds of converters in the IoC container. Theoretically you could need to replace a converter with another one, but how often will that happen? Also, if a converter is used in only one place, changing the converter will (maybe) require changing that class as well. I would not add converters to the IoC. The only converters that I would add to the IoC are the ones that are used in more than one place or that realistically might need to be replaced…
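To make the distinction concrete, here is a small sketch of the argument; all the types are hypothetical and the registration uses Unity-style syntax purely as an example:

```csharp
using Microsoft.Practices.Unity;

interface ICurrencyConverter { decimal Convert(decimal amount, string from, string to); }

class DefaultCurrencyConverter : ICurrencyConverter
{
    public decimal Convert(decimal amount, string from, string to) { return amount; } // stub
}

// A converter used by exactly one class gains nothing from the container.
class OrderToRowConverter { /* local, single-use */ }

class OrderViewModel
{
    // Just create it where it is needed; the dependency stays local and obvious.
    private readonly OrderToRowConverter _converter = new OrderToRowConverter();
}

class Program
{
    static void Main()
    {
        var container = new UnityContainer();
        // Only a converter shared across components, or one you genuinely
        // expect to swap, earns a registration.
        container.RegisterType<ICurrencyConverter, DefaultCurrencyConverter>();
    }
}
```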

Windows Azure Virtual Machine - Where is my data from disk D:

Today we will talk about what kinds of disks are available on a virtual machine (VM) in Windows Azure. I decided to write this post after someone asked me why the content of drive D: was lost after a restart. He told me that this is a bug in Azure, but it is not. When you create a Windows Azure Virtual Machine you will notice that more than one disk is attached to it. There are three types of disk on an Azure VM: the OS disk (drive C:), the temporary storage disk (drive D:) and the data disks (drives E:, F:, …). The OS disk contains the operating system. It is a VHD that is attached to the machine; you can create a custom VHD that contains the operating system and any other applications you need. At this moment the maximum size of this disk is around 124 GB. If you need more space, you can use the data disks. Each VM can have one or more data disks attached to it. Each VHD attached to the VM can have a maximum size of 1 TB, and the maximum…
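A minimal sketch of the practical consequence, assuming the default drive letters (D: temporary, E: an attached data disk; paths are made up): anything that must survive a restart or re-provisioning must not live on D:.

```csharp
using System.IO;

class DiskRules
{
    static void Main()
    {
        // D: is the temporary disk: fine for scratch data and caches,
        // but its content can disappear after a restart or re-provisioning.
        Directory.CreateDirectory(@"D:\scratch");
        File.WriteAllText(@"D:\scratch\session.cache", "recomputable data");

        // Anything durable belongs on an attached data disk
        // (assumed here to be mounted as E:).
        Directory.CreateDirectory(@"E:\data");
        File.WriteAllText(@"E:\data\orders.log", "data that must survive");
    }
}
```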

Representing sub-system dependencies

When we start designing a big system we might also think about how to split our solution into sub-systems. We can end up having a lot of sub-systems with different dependencies. The same thing happens when we split a sub-system into components: we get a lot of components with different dependencies. How can we represent these dependencies in a simple and clean way? I have seen different solutions where you end up with complicated schemas or with trees. Both are hard to read, and people will spend a lot of time understanding the dependencies: X depends on Y and Z, and so on. This month I read “Software Architecture in Practice”, written by L. Bass, P. Clements and R. Kazman, and discovered a great and simple way to represent all these dependencies. They can be captured in a simple table where the diagonal holds our sub-systems (or components). Each input resource that is needed by a sub-system appears on the columns, and each resource will be provided…
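I cannot reproduce the book's exact figure from this excerpt, but a made-up table in that spirit might look like this, with the sub-systems on the diagonal and an x marking a resource the row's sub-system consumes from the column's provider (my own rendering, not the book's notation):

```
            A     B     C
      A     ■           x     <- A consumes a resource provided by C
      B     x     ■           <- B consumes a resource provided by A
      C           x     ■     <- C consumes a resource provided by B
```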

Shared Access Signature and Access Level on Blob, Tables and Queues

Some months ago I wrote some posts about Shared Access Signature (SAS). Yesterday I received a simple question that appears when we start using SAS over Windows Azure Storage (blobs, tables or queues): when I’m using a Shared Access Signature over a blob, should I change the public access level? People can have the feeling that from the moment you start using SAS over a container or a blob, the content can no longer be accessed in the classic way. SAS doesn’t change the public access level; because of this, if your blob is public, then people will be able to access it with a SAS token or with a normal URL. To control access to a container or a blob using only SAS, you need to set the access level of the content to private. This can be done from different places (the Windows Azure Portal, different storage clients or from code). Having a container or blob with the access level set to private means that people with account credentials will be able to access the content, while everybody else will need a valid SAS token…
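A minimal sketch with the storage client library of that era (the connection string, container and blob names are made up): make the container private first, then hand out a read-only SAS URL.

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasDemo
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var container = account.CreateCloudBlobClient().GetContainerReference("reports");

        // Private access level: without this, a public blob stays reachable
        // through its plain URL no matter what SAS tokens you issue.
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Off
        });

        var blob = container.GetBlockBlobReference("report.pdf");
        string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(1)
        });

        Console.WriteLine(blob.Uri + sas); // the only way in, once private
    }
}
```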

SQL Azure Export Process - Error SQL71562: Procedure: [dbo].[store_procedure_name] has an unresolved reference to object [database_name].[dbo].[table_name]

Using a database involves a backup mechanism. If we are using SQL Azure and the new Windows Azure portal, we can make a manual backup very easily. If our database was deployed a long time ago, we may discover that an error occurs during the export process. This error usually appears only for stored procedures, and the message is similar to this: Error SQL71562: Procedure: [dbo].[store_procedure_name] has an unresolved reference to object [database_name].[dbo].[table_name]. This happens because your stored procedure contains the name of the database in the table path or in the stored procedure path. The solution to this problem is quite simple, but it requires changing all your database scripts that contain the database name in the path. Each line of script that refers to a specific database name needs to be rewritten so that it no longer contains the database name, and the updated scripts need to be deployed again. Before: INSERT …
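Since the fix is mechanical, here is a hedged sketch of automating it over a folder of scripts (the database name and folder are placeholders; a plain string replace like this assumes the name only ever appears as a three-part-name prefix):

```csharp
using System.IO;

class StripDatabasePrefix
{
    static void Main()
    {
        const string prefix = "[MyDatabase].";   // assumed database name
        const string folder = @"C:\scripts";     // assumed script folder

        foreach (var path in Directory.GetFiles(folder, "*.sql"))
        {
            // Turn three-part names like [MyDatabase].[dbo].[Orders]
            // into two-part names like [dbo].[Orders].
            var text = File.ReadAllText(path);
            File.WriteAllText(path, text.Replace(prefix, string.Empty));
        }
    }
}
```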

CRON job in Windows Azure - Scheduler

Yesterday I realized that I have to run a task on Windows Azure every 30 minutes. The job is pretty simple; it has to retrieve new data from an endpoint. This is a perfect task for a CRON-based job. The only problem with the current version of Windows Azure is that it doesn’t have built-in support for CRON jobs. People might say that we could use a timer and run the given task every 30 minutes. This is a good solution and it would work perfectly, but I wanted something different. I didn’t want to create my own CRON-based job; I wanted something built into the system. I started to look around and found an add-on for this. Windows Azure offers a Store where any company can offer different add-ons for Windows Azure. These add-ons are very easy to install and, if they are not free, the payment method is quite simple: each month the Azure subscription will include the cost of these add-ons. From my perspective this is a pretty simple and clean payment mechanism. Under the Store…
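For the timer alternative mentioned above (the one I decided against), a minimal worker-role-style sketch could look like this; FetchNewData is a hypothetical stand-in for the real call to the endpoint:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class Poller
{
    static void Main()
    {
        while (true)
        {
            try
            {
                FetchNewData(); // placeholder for pulling new data from the endpoint
            }
            catch (Exception ex)
            {
                Trace.TraceError(ex.ToString()); // never let one failure kill the loop
            }
            Thread.Sleep(TimeSpan.FromMinutes(30));
        }
    }

    static void FetchNewData() { /* hypothetical: retrieve and store new data */ }
}
```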

Workflows over Windows Azure

Nowadays, almost all enterprise applications have at least one workflow defined. Not only complex applications need workflows; even a simple e-commerce application can have a workflow defined to manage orders or product stock, for example. Supporting a workflow in our application can be done in two ways. The first approach is to search the market for available solutions and choose the one most suitable for our project. This approach gives us a workflow mechanism, but at the same time it can generate other costs through licensing and/or developing custom functionality. The second approach is to develop the workflow mechanism from scratch. This can be pretty tricky because there are a lot of problems that need to be solved: a failover mechanism, rule definitions, a guarantee that no message in the workflow will be lost, and many more things that need to be defined and implemented on our own. All the data that flows through the workflow…

Scalability points on Cloud

Cloud - another buzzword that we hear almost every day. At the moment, the providers that offer this kind of service include Amazon, Microsoft (Windows Azure), Google and Rackspace. When we think about the cloud, what comes to mind? One, two or more instances that we keep in the cloud, and when we need more resources we can increase the number of instances very easily. A cloud provider like Microsoft gives us several scalability points. In this article we will find out how to create a scalable cloud application and how to use the services that Windows Azure provides. Content Delivery Network: let’s say that we have a web application with a lot of static content. When we say static content we think about images, CSS and HTML that don’t change every second (at every request). Normally, when we observe that the load on our machines is quite high, we try to increase the instance count. This seems like a good idea for our problem, but in terms of cost it might not be the best choice…
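To make the CDN point concrete: in the Azure of that time you enabled a CDN endpoint on top of a storage account from the portal, and then simply addressed public blobs through the CDN host instead of the storage host. Both URLs below are made-up examples:

```csharp
class CdnUrls
{
    // Same blob, two doors: the storage origin and the edge-cached CDN host,
    // so static content stops consuming cycles on our own instances.
    const string OriginUrl = "http://myaccount.blob.core.windows.net/assets/site.css";
    const string CdnUrl    = "http://az12345.vo.msecnd.net/assets/site.css"; // assumed CDN endpoint
}
```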