
Certificates and resource management

Last month I had two posts where I talked a little about certificates in .NET (1, 2).
In this post I want to discuss resource management when X509Certificate2 is used. This applies to X509Certificate as well.
When we use a .NET object we usually check if the IDisposable interface is implemented. If yes, then we call the 'Dispose' method. But unfortunately X509Certificate doesn't implement this interface.
Behind this class there is a handle (SafeCertContextHandle) to unmanaged resources. Because of this, each new certificate instance holds a handle to unmanaged resources. If you need to process 2,000 certificates from the store, it is very easy to end up with a wonderful "OutOfMemoryException".
To release all the resources held by a certificate instance you need to call the 'Reset' method. This method releases all the resources associated with that certificate.
X509Certificate2 cert = …
…
cert.Reset();
X509Store has the same story: you need to call the 'Close' method.
X509Store store = …
…
store.Close();
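Putting the two together, here is a minimal sketch (store name and what you do with each certificate are just for illustration) that enumerates certificates from the current user's "My" store and releases the unmanaged resources explicitly:
using System;
using System.Security.Cryptography.X509Certificates;

class CertificateCleanupSample
{
    static void Main()
    {
        X509Store store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        try
        {
            store.Open(OpenFlags.ReadOnly);
            foreach (X509Certificate2 cert in store.Certificates)
            {
                try
                {
                    // Do something with the certificate.
                    Console.WriteLine(cert.Thumbprint);
                }
                finally
                {
                    // Releases the unmanaged certificate context (SafeCertContextHandle).
                    cert.Reset();
                }
            }
        }
        finally
        {
            // Releases the handle to the underlying certificate store.
            store.Close();
        }
    }
}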
The 'Close' method is easy to spot when you look over the available methods, and its name hints that there are resources that need to be released. The 'Reset' method name doesn't help you too much with this. Because of this it is very easy to end up with an "OutOfMemoryException".
I don't understand why X509Certificate2 doesn't implement the IDisposable interface. The resource-release idiom should apply in this case as well.
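If you want the using/Dispose idiom anyway, a small wrapper can provide it. This is a hypothetical sketch (the class name and usage are mine, not part of the framework) that simply forwards Dispose to Reset:
using System;
using System.Security.Cryptography.X509Certificates;

sealed class DisposableCertificate : IDisposable
{
    public X509Certificate2 Certificate { get; private set; }

    public DisposableCertificate(X509Certificate2 certificate)
    {
        Certificate = certificate;
    }

    public void Dispose()
    {
        if (Certificate != null)
        {
            // Releases the unmanaged certificate context held by the instance.
            Certificate.Reset();
            Certificate = null;
        }
    }
}

// Usage (file name and password are placeholders):
// using (var wrapper = new DisposableCertificate(new X509Certificate2("cert.pfx", "password")))
// {
//     Console.WriteLine(wrapper.Certificate.Subject);
// }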

Comments

  1. Somebody at MS seems to be too busy to implement this:
    http://connect.microsoft.com/VisualStudio/feedback/details/414020/x509certificate2-shoud-implement-idisposable-and-call-the-safecertcontext-dispose-method-on-the-safecertcontexthandle-member
    :)

    1. The status is 'Closed - Won't Fix'. The ticket was opened in 2009.

