
Posts

Showing posts from June, 2013

TechEd 2013 - Day 3

The 3rd day of TechEd has finished. At the keynote I had the opportunity to see a demo of a Windows Store application for which I wrote part of the code and designed it. I also discovered that over 66% of companies have an active bring-your-own-device policy. For them, Windows 8 and Windows Phone 8 come with special features that secure and protect their private data. Because of this, companies like SAP will have all their sales-force applications migrated to Windows 8 (by the end of 2015). From the developer perspective, we should try the new Visual Studio 2013. It has great features, especially for load testing. Even in preview it is pretty stable. Also, if you are starting to develop a client-server application that will communicate over HTTP/S, then… go with Web API. Why? Because Microsoft invests and will continue to invest a lot of resources in that area. WCF is good for TCP/IP and other protocols, but for HTTP/S the recommendation is Web API. The last presentation of the day was ve
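To make that recommendation concrete, here is a minimal sketch of the Web API style (my own example; the Order type and routes are hypothetical, not from the session):

using System.Collections.Generic;
using System.Web.Http;

// Hypothetical resource type, for illustration only.
public class Order
{
    public int Id { get; set; }
    public string Product { get; set; }
}

// A minimal ASP.NET Web API controller; HTTP verbs map to method names by convention.
public class OrdersController : ApiController
{
    // GET api/orders
    public IEnumerable<Order> Get()
    {
        return new[] { new Order { Id = 1, Product = "Sample" } };
    }

    // GET api/orders/5
    public Order Get(int id)
    {
        return new Order { Id = id, Product = "Sample" };
    }
}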

TechEd 2013 - Day 2

The second day of TechEd 2013 has ended. For me, this day was full of interesting information related to SQL Server and Windows Server. From my perspective, the most interesting sessions of the day were related to security. During these sessions I realized that we are extremely vulnerable to attacks, even if you change the server password every 4 hours. An attack can be made in 5 seconds – the same problem exists on Linux systems, not only on Windows. Things that I consider interesting: FOCA – an interesting tool for discovering the public content that is exposed by an internet endpoint. It will extract metadata like user names, machine names, software versions and so on. It is extremely easy to modify a worm or trojan to make it undetectable, and for $100 you can buy an application that gives you the ability to “manage” the infected machines. You should NEVER have an async method that returns void, except when you are working with event handlers. If an exception occu
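As a minimal sketch of why async void is dangerous (my own example, not from the session): the caller of an async void method has no Task to await, so a thrown exception cannot be caught by the caller; returning Task makes the exception observable.

using System;
using System.Threading.Tasks;

class AsyncVoidDemo
{
    // BAD: there is no Task to await, so the exception below escapes
    // to the synchronization context (or tears down a console app).
    static async void FireAndForget()
    {
        await Task.Delay(100);
        throw new InvalidOperationException("lost exception");
    }

    // GOOD: returning Task lets the caller await and observe the exception.
    static async Task DoWorkAsync()
    {
        await Task.Delay(100);
        throw new InvalidOperationException("observed exception");
    }

    static async Task Main()
    {
        try
        {
            await DoWorkAsync(); // the exception is caught here
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine("Caught: " + ex.Message);
        }

        // A try/catch around this call would NOT catch its exception;
        // in a console app the unobserved exception crashes the process.
        FireAndForget();
        await Task.Delay(500);
    }
}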

TechEd 2013 - Day 1

The first day of TechEd Europe 2013 has just ended. What can I say? Wow. A lot of interesting sessions about Microsoft technology. I participated in many sessions on different topics like big data, legacy code, load and performance testing in the cloud and how we can increase the performance of our applications. Let’s see some ideas that I noted and want to share with you: By creating a virtual network with the machines that we have in the cloud we can do remote debugging very easily from Visual Studio. With the new version of SQL Server and its in-memory database we can increase our performance by a factor of 6x. If we recreate the stored procedures, the new SQL Server will compile them to binary code (as a DLL) – we can gain up to 26x better performance. Tested and reviewed code will contain up to 70% fewer bugs. PerfView is a great tool, I need to check it out. If you deploy a VM or a web site to Azure you can win a super car (Microsoft contest –

TechEd 2013 - Pre-Conference Day

The pre-conference day of TechEd has finished. It was a full day of interesting seminars. This morning it was pretty hard for me to decide which seminar to attend. I couldn’t decide between “Enterprise Agility is Not an Oxymoron” and “Extending Your Apps and Infrastructure into the Cloud”. In the end I decided to go to the second presentation, where I found a lot of cool information related to this topic. In the next part of the post I will enumerate some interesting things that I discovered: At this moment, most applications that are hosted on Windows Azure are ON AND OFF or PREDICTABLE BURSTING. To tell you the truth, I didn’t expect the ON AND OFF apps to be on top. Over 98% of organizations use virtualization. This is a huge opportunity for cloud providers. Be aware of the SLA. If the provider cannot offer the uptime from the SLA you will receive money back for the downtime, but you will not recover the money that you

WPF - Binding of TextBox in a Grid

Today we will talk about WPF and how we can improve the loading time of our window. One of my colleagues started to have a performance problem with an application written in WPF. The application has to run on virtual machines with Windows and .NET 3.5. The application has a window with a grid that contains 200 cells. The model behind the view is an object with 50 properties. I know, we have too many properties, but this is the model; we need to optimize the loading time of the page, not to do a review.

public class MyModel : INotifyPropertyChanged
{
    public FooModel Obj1 { get; set ... }
}

public class FooModel : INotifyPropertyChanged
{
    public string P1 { get; set ... }
    public string P2 { get; set ... }
    ...
    public string P50 { get; set ... }
}

This model is bound to a grid that contains 4 columns and 50 rows. Each property is bound 4 times to its cell control. The control of the cell is a simple TextBox (this is sample code). The grid is similar to this:
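The property bodies above are elided; as a minimal sketch of the model side (my own, assuming the standard WPF change-notification pattern on .NET 3.5), each of the 50 properties would look like this:

using System.ComponentModel;

public class FooModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string p1;
    public string P1
    {
        get { return p1; }
        set
        {
            p1 = value;
            OnPropertyChanged("P1"); // notifies all 4 TextBox cells bound to P1
        }
    }

    // P2 ... P50 follow the same pattern.

    protected void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}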

Topic Isolation - (Part 6) Testing the limits of Windows Azure Service Bus

Some time ago I wrote about how I managed to process millions of messages over Windows Azure Service Bus. I discovered that processing millions of messages every hour will increase the latency and response time of our topic. I wanted to find out what happens with the topics from the same account and data center when the load on one of them increases drastically. To be able to run these tests and find the answer to my questions I made the following setup:

Create 3 topics in the same data center, two on the same account and another one on a different account.
Create a worker role that will read messages from the topics (via subscriptions) and monitor the delay time. The results will be written to Windows Azure Tables.
Create 8 worker roles that will push hundreds of thousands of messages in a very short period of time to the same topic (multi-threading rules).
Create 4 worker roles that will consume messages from our topic (via subscription).

In the end we end up with 1
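As a rough sketch of the send/receive plumbing behind such a setup (using the classic Microsoft.ServiceBus.Messaging API of that era; the connection string, topic and subscription names are placeholders):

using System;
using Microsoft.ServiceBus.Messaging;

class TopicLatencyProbe
{
    // Placeholder values, for illustration only.
    const string ConnectionString = "Endpoint=sb://...";
    const string TopicPath = "testtopic";
    const string SubscriptionName = "monitor";

    static void Send()
    {
        TopicClient topicClient =
            TopicClient.CreateFromConnectionString(ConnectionString, TopicPath);

        BrokeredMessage message = new BrokeredMessage("payload");
        message.Properties["SentAtUtc"] = DateTime.UtcNow; // used later to compute the delay
        topicClient.Send(message);
    }

    static void Receive()
    {
        SubscriptionClient subscriptionClient = SubscriptionClient.CreateFromConnectionString(
            ConnectionString, TopicPath, SubscriptionName);

        BrokeredMessage message = subscriptionClient.Receive();
        if (message != null)
        {
            DateTime sentAtUtc = (DateTime)message.Properties["SentAtUtc"];
            TimeSpan delay = DateTime.UtcNow - sentAtUtc; // this is what gets logged to Azure Tables
            message.Complete();
        }
    }
}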

Coding Stories VI - Inconsistent property naming

Looking over some code I found something similar to this:

public class Ford
{
    ...
    public int Power { get; set; }
}

public class Dacia
{
    ...
    public int Power { get; set; }
}

public class FordModel
{
    ...
    public int CarForce { get; set; }
}

public class DaciaModel
{
    ...
    public int CarForce { get; set; }
}

From the naming perspective, we have two different names for properties that represent the same thing. Because of this the code can be misunderstood and a developer will need to look twice to understand what is going on there.

public class Ford
{
    ...
    public int Power { get; set; }
}

public class Dacia
{
    ...
    public int Power { get; set; }
}

public class FordModel
{
    ...
    public int CarPower { get; set; }
}

public class DaciaModel
{
    ...
    public int CarPower { get; set; }
}

Now it is better, but there is still a problem. Looking over the Ford and Dacia classes we notice that there is a common property that defines a Car. In this case we should have an interface or a base
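As a minimal sketch of that direction (my own illustration): extract the common property into an ICar contract that both car classes implement.

// Hypothetical common contract for the shared property.
public interface ICar
{
    int Power { get; set; }
}

public class Ford : ICar
{
    public int Power { get; set; }
}

public class Dacia : ICar
{
    public int Power { get; set; }
}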

Coding Stories V - Serialization with XmlArrayItem

Serialization – almost all applications need this feature. Nowadays serialization can be done in different formats. The trend now is to use JSON, but there are many applications that use XML. Let’s look over the following code:

public class Car
{
    public string Id { get; set; }
}

public class City
{
    [XmlArray("Cs")]
    [XmlArrayItem("C")]
    public List<Car> RegisterCars { get; set; }
}

...
XmlSerializer serializer = new XmlSerializer(typeof(City));
serializer.Serialize(writer, city);

Output:

<City>
  <Cs>
    <C>
      <Id>1</Id>
    </C>
    <C>
      <Id>2</Id>
    </C>
  </Cs>
</City>

Even though the code compiles and works perfectly, there is a small thing that can affect us. Because we use the XmlArrayItem attribute, each node from the list will be named “C”. If we ever need to deserialize only a C node, we will have a surprise. This cannot be done with the default XmlSerializer class. XmlS
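One common workaround (a minimal sketch of my own, not necessarily the one the post goes on to describe) is to construct an XmlSerializer with an XmlRootAttribute that maps the lone “C” element onto the Car type:

using System;
using System.IO;
using System.Xml.Serialization;

public class Car
{
    public string Id { get; set; }
}

class SingleNodeDemo
{
    static void Main()
    {
        string fragment = "<C><Id>1</Id></C>";

        // Override the expected root element name so a lone "C" node maps to Car.
        XmlSerializer serializer = new XmlSerializer(typeof(Car), new XmlRootAttribute("C"));

        using (StringReader reader = new StringReader(fragment))
        {
            Car car = (Car)serializer.Deserialize(reader);
            Console.WriteLine(car.Id); // prints 1
        }
    }
}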

MapReduce on Hadoop - Big Data in Action

In the previous post we discovered what the secret of Hadoop is when it needs to store hundreds of TB. Based on a simple master-slave architecture, Hadoop is a system that can store and manipulate big amounts of data very easily. Hadoop contains two types of nodes for storing data. The NameNode is the node that plays the role of master. It knows the name and location of each file that Hadoop stores. It is the only node that can identify the location of a file based on the file name. Around this node we can have 1 to n nodes that store the file contents. This kind of node is called a DataNode.

Data processing

Hadoop stores big data without any kind of problem. But it became known as the system that can process big data in a simple, fast and stable way. It is a system that can process and extract the information that we want from hundreds of TB of data. This is why Hadoop is the king of big data. In this post we will discover the secret of data processing – how Hadoop m
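As a toy illustration of the MapReduce idea (my own sketch in plain C#, not actual Hadoop code): a map step emits (word, 1) pairs and a reduce step sums the counts per word.

using System;
using System.Collections.Generic;
using System.Linq;

class WordCountSketch
{
    // Map: split each input line into (word, 1) pairs.
    static IEnumerable<KeyValuePair<string, int>> Map(string line)
    {
        foreach (string word in line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
        {
            yield return new KeyValuePair<string, int>(word.ToLowerInvariant(), 1);
        }
    }

    // Reduce: sum all the 1s that were emitted for the same word.
    static KeyValuePair<string, int> Reduce(string word, IEnumerable<int> counts)
    {
        return new KeyValuePair<string, int>(word, counts.Sum());
    }

    static void Main()
    {
        string[] lines = { "big data big results", "big data" };

        var results = lines
            .SelectMany(Map)                   // map phase
            .GroupBy(p => p.Key, p => p.Value) // shuffle: group values by word
            .Select(g => Reduce(g.Key, g));    // reduce phase

        foreach (var pair in results)
        {
            Console.WriteLine(pair.Key + ": " + pair.Value); // big: 3, data: 2, results: 1
        }
    }
}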

Code Review and Under Stress Teams

Nowadays, being agile is a trend. All the projects are agile in their own sense and of course each task needs to be reviewed. Today I would like to write about the review step. Code review is the process where a person looks over the code and tries to find mistakes that were missed by the developer who wrote that code. There are different ways reviews are made. Normally, in big teams the review is made separately, without pair programming or similar things. Because people don’t do code reviews as they should, the value of this process decreases. In the end the management will realize that there is no value in code reviews and they will stop allocating time for them. Why did they end up with this bad idea? Because the developers that are doing the reviews are not doing it right. For example, when a developer doesn’t have time for all his tasks, he uses the review time to finish other tasks. In this case the review will not be made. In the best case scenario they

Certificates Hell - How you can kill all your clients

Certificates – a lot of times I have seen applications in production go down because of them. From big companies to small companies, each one has had a lot of problems because of certificates. I would name these problems Certificates Hell. Each problem that appeared because of them was caused by people or by a wrong process. In this post I will tell you a short story about how you can make over 10,000+ clients unhappy. In a web application or a client/server application you need to use secure connections and of course certificates. The application contains a backend that is pretty complicated and several web applications. In addition to this, there are also some native applications on iPhone, iPad, Android and Windows Phone. They have been in production for about 5 years. In this period of time the web application changed and they started to create different native applications for mobile devices. Each native application also contains some certificates used to authenticate users and establish secure con

Coding Stories IV - Ordering a list

Let’s look over the following code:

public List<Result> SomeFunction(string machineId)
{
    ...
    if(machineId == null)
    {
        ...
        List<Result> results = GetAllResults();
        return results.OrderByDescending(x => x.Date).ToList();
    }
    else
    {
        ...
        return GetResultsByMachine(machineId);
    }
}

In production there was a bug reported by a client. From time to time, clients received elements in the wrong order. The cause of the problem is not the order function; that is a core function of .NET that works perfectly. The problem is that the ordering is not applied in both branches of the IF. Because in general the list was already ordered in the database and clients usually didn’t reach the ELSE branch of the code, this case appeared very rarely.

public List<Result> SomeFunction(string machineId)
{
    ...
    List<Result> results;
    if(machineId == null)
    {
        ...
        results = GetAllResults();
    }
    else
    {
        ...
        results = GetResultsByMachine(machineId);
    }
    re
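As a minimal sketch of where that refactoring ends up (my own completion of the idea above; the two Get methods are assumed to exist elsewhere): the list is ordered exactly once, after the IF, so both branches share the same ordering.

using System;
using System.Collections.Generic;
using System.Linq;

public class Result
{
    public DateTime Date { get; set; }
}

public class ResultService
{
    public List<Result> SomeFunction(string machineId)
    {
        List<Result> results;

        if (machineId == null)
        {
            results = GetAllResults();
        }
        else
        {
            results = GetResultsByMachine(machineId);
        }

        // Ordering happens in exactly one place, for both branches.
        return results.OrderByDescending(x => x.Date).ToList();
    }

    // Stubs so the sketch compiles; the real implementations query the database.
    private List<Result> GetAllResults() { return new List<Result>(); }
    private List<Result> GetResultsByMachine(string machineId) { return new List<Result>(); }
}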

Windows Azure Billing model - per-minute granularity tip

Windows Azure is changing. A lot of new features were released and existing ones were improved. In this post I want to talk about one thing that changed on Azure – the billing model. Until now the billing granularity was per hour. This means that if I used a web role for 30 minutes, I would pay for a full hour. If I used it for 1 hour and 1 minute, I would pay for 2 hours. Because other cloud providers offer services with per-minute granularity, Microsoft also decided to change the granularity to minutes. This is great: you will pay only for the time when you use a compute resource (like web roles, worker roles, VMs, Mobile Services and so on). For classic applications that use Azure for hosting and don’t scale (up and down) for short periods of time this change will not affect the bill at the end of the month – we will have the same flat value. The true value of per-minute billing is for applications that scale up for very short periods of time. For example we have a scenar
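As a small worked example with hypothetical numbers (my own, not from the post): 20 extra instances that run for a 10-minute burst at $0.12 per instance-hour.

using System;

class BillingGranularityDemo
{
    static void Main()
    {
        // Hypothetical numbers, for illustration only.
        double ratePerInstanceHour = 0.12; // $/hour for one instance
        int extraInstances = 20;
        int burstMinutes = 10;

        // Old model: any started hour is billed as a full hour.
        double hourlyBilledHours = Math.Ceiling(burstMinutes / 60.0);
        double costPerHourModel = extraInstances * hourlyBilledHours * ratePerInstanceHour;

        // New model: you pay only for the minutes actually used.
        double costPerMinuteModel = extraInstances * (burstMinutes / 60.0) * ratePerInstanceHour;

        Console.WriteLine("Per-hour billing:   ${0:0.00}", costPerHourModel);   // $2.40
        Console.WriteLine("Per-minute billing: ${0:0.00}", costPerMinuteModel); // $0.40
    }
}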

[Post-Event] Cloud Era at Faculty of Science - Sibiu

This weekend I had the opportunity to be invited by the Faculty of Science from Sibiu. I had a 3-hour session where I talked about cloud in general and the features of Windows Azure. Even though the session was on a Saturday morning, a lot of students were interested to discover the secrets of cloud and Azure. My slides and the demo code can be found at the bottom of the post. Slides: Cloud and Windows Azure from Radu Vunvulea Demo: http://sdrv.ms/13gbjVW