
ASP.NET vNext - Deploying your own version of .NET Framework

In this post we will talk about a new feature of .NET that will allow us to deploy the .NET CLR together with the application itself. No more .NET installation and versioning problems… or not.
The new version promises that we will be able to include in the application package all the .NET dependencies that we are using. This means we can run our application on machines that don't have .NET installed. When we create our build, the package will also contain all the .NET resources needed.
For example, the client will no longer need to install the .NET Framework. This is great, because there were cases when we had to install both .NET 4.0 and .NET 4.5 on different machines because of dependencies.
At the same time, the client will have locally only the .NET components that the application actually uses. For example, we don't need WCF or WF components installed on the machine if the application doesn't use them. The deployment and setup steps will be simpler, and we will no longer need to consume the client's storage space with the full .NET Framework stack.
Another nice thing is related to versioning. We no longer need to care what version of .NET the client has on the machine, because we include the .NET components directly in the package. In this way CLR versioning becomes simpler, cleaner and self-contained.
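A quick way to see which runtime a process actually loaded is a minimal console check like the sketch below, assuming the standard BCL members Environment.Version and RuntimeEnvironment.GetRuntimeDirectory() are available in the runtime that ships with the application; for a self-contained deployment the runtime directory should point inside the application folder rather than a machine-wide install.

    using System;
    using System.Runtime.InteropServices;

    class RuntimeCheck
    {
        static void Main()
        {
            // The CLR version the current process is running on.
            Console.WriteLine("CLR version:    " + Environment.Version);

            // The folder the runtime was loaded from; for a self-contained
            // deployment this is expected to be inside the application folder.
            Console.WriteLine("Runtime folder: " + RuntimeEnvironment.GetRuntimeDirectory());
        }
    }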
In this beautiful story people could see a problem. Let's assume that you are developing 5 different applications for an enterprise client, who needs to install all of them on the same machine. Because each application comes with its own .NET dependencies, you could theoretically have the same .NET components (with the same version) duplicated. On the other hand, storage is pretty cheap, the .NET Framework is not very big, and on top of this you include only the .NET components that you are using. Because of this you will not have 1.6 GB of .NET bits per application; you might have only 200 MB or less.
I would say that this way of deployment has a lot of advantages. The cost of storage is pretty low and the .NET footprint is small. Keep in mind that you have all the dependencies in your own project and you can update them independently.
Our application folder will contain the CLR and the .NET Framework libraries, delivered similarly to NuGet packages. You can even specify, for each sub-package of the .NET Framework, what version you want. For example we could have Microsoft.AspNet.Mvc version 4.5.2.1.0 and Microsoft.AspNet.Hosting version 5.4.2.3.
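To illustrate how this per-package versioning could be expressed, below is a minimal sketch of the dependencies section of a project.json file. The package names follow the example above; the version strings are hypothetical placeholders, not exact versions published for the alpha.

    {
        "dependencies": {
            "Microsoft.AspNet.Mvc": "6.0.0-alpha3",
            "Microsoft.AspNet.Hosting": "1.0.0-alpha3"
        }
    }

Each entry listed here is restored as a package and travels with the application when we create the build, instead of being resolved from a machine-wide installation such as the GAC.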
In conclusion, we could say that we will have more flexibility than we had until now and our life will be easier. I expect that on real applications, especially large ones, there will be problems with versioning, updating and things like this. But these are not blockers, and in the end we gain a lot from this feature.

Comments

  1. Nice, but I'm wondering: when a bug or security exploit is discovered in one .NET Framework version, how will the admin (or Windows Update) patch all these self-contained copies of .NET quickly? :) (if they are scattered in various folders and not side-by-side in the GAC)

    Replies
    1. In the same way you make an update for a NuGet package (EF, for example).
      This is only the alpha version; I can bet that they will come with a solution for this problem.

    2. Can a domain admin push an update to an assembly (DLL) to 100 client workstations using NuGet only?

      If each application has its own 'private' copy of a certain .NET Framework version (not in the GAC), the application vendor will be responsible for updating it, unless the admin somehow has a built-in way to discover all applications that use a certain .NET version.


