
WCF and "Error in deserializing body of reply message for operation …" errors

These days I had to update a component that is used in Microsoft Dynamics AX. I don't work with AX myself, but some time ago I had to write a C# component that calls a third-party WCF service and is used by an AX application.
The client proxy in the C# component was regenerated, the code was updated with the new logic, and the unit tests were updated as well; everything was green. Perfect, so we sent the component to the AX team to update their reference to it.
They came back to us with a big, ugly error:
Error in deserializing body of reply message for operation …
WTF, we had the same endpoint. We had a client proxy that worked in our unit tests and in a command-line test application, but when AX made calls through our component we got this great error message. The oddest thing was that we received the error even when the message returned from the endpoint was only 20 characters long. Before the update, AX had worked fine with our component.
Searching the internet, we found that we needed to increase the limits on the message reader. We increased the reader quotas like this:
<readerQuotas maxDepth="999990" maxStringContentLength="999990"
              maxArrayLength="999990" maxBytesPerRead="999990"
              maxNameTableCharCount="999990" />
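For reference, this element goes inside a binding definition in the client configuration; here is a minimal sketch of the surrounding section (the binding name and the maxReceivedMessageSize value are illustrative, not from our project):

<bindings>
  <basicHttpBinding>
    <binding name="ThirdPartyBinding" maxReceivedMessageSize="999990">
      <readerQuotas maxDepth="999990" maxStringContentLength="999990"
                    maxArrayLength="999990" maxBytesPerRead="999990"
                    maxNameTableCharCount="999990" />
    </binding>
  </basicHttpBinding>
</bindings>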
The same values can be set from code, if you configure the binding programmatically:
bindingElement.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
bindingElement.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
bindingElement.ReaderQuotas.MaxDepth = Int32.MaxValue;
bindingElement.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
bindingElement.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
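Putting it together, here is a minimal sketch of how the whole binding could be built and handed to a generated proxy; the ThirdPartyServiceClient class and the endpoint URL are hypothetical placeholders for the real generated client:

using System;
using System.ServiceModel;

public static class ClientFactory
{
    public static ThirdPartyServiceClient Create()
    {
        BasicHttpBinding binding = new BasicHttpBinding();

        // Related limit that usually has to grow together with the reader quotas.
        binding.MaxReceivedMessageSize = Int32.MaxValue;

        // Lift the default reader limits that were causing the deserialization error.
        binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
        binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
        binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
        binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
        binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;

        // Generated WCF proxies accept a binding and an endpoint address.
        EndpointAddress address = new EndpointAddress("https://example.com/ThirdPartyService.svc");
        return new ThirdPartyServiceClient(binding, address);
    }
}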
Guess what? The problem was solved and everything worked as expected. The interesting thing is that even though the message from the server was very small, we still received the error if we didn't set the reader quotas.

Comments

  1. Or in other words: 'If nothing else works, get a bigger hammer..' :-)

    Reply:
    1. It seems that there is a problem related to this in AX. This is why X++ is not for us :-)


