Managing bearer tokens when we scale to more than one server - OWIN, Katana, AngularJS

In this post we will talk about bearer token authentication and how we should manage the token when our application runs on more than one server, using OWIN, Katana and AngularJS.

Token Characteristics
First of all, let's look at the characteristics of a bearer token:
  • Generated by the server
  • Contains the user's claims (what operations a user can perform, what roles they have)
  • All the information a token contains is encrypted
  • Token information can be decrypted only by the machine that created the token
  • The expiration date is encrypted in the token itself
  • No token information is stored on the server side
  • The encryption is safe enough to be used worldwide (Facebook, Google and Twitter use it)
  • A token can be used by an external system only when the decryption key is shared
  • Tokens are easy and cheap to generate
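To make this concrete, below is a minimal sketch of how a Katana-hosted application can issue and accept bearer tokens. It assumes the Microsoft.Owin.Security.OAuth package; the token endpoint path, the one-hour lifetime and the SimpleAuthorizationServerProvider name are illustrative choices, and the credential check is a placeholder you must replace with your own user store.
  using System;
  using System.Security.Claims;
  using System.Threading.Tasks;
  using Microsoft.Owin;
  using Microsoft.Owin.Security.OAuth;
  using Owin;

  public class Startup
  {
      public void Configuration(IAppBuilder app)
      {
          // Issue bearer tokens from /token and accept them on every request.
          app.UseOAuthAuthorizationServer(new OAuthAuthorizationServerOptions
          {
              TokenEndpointPath = new PathString("/token"),
              AccessTokenExpireTimeSpan = TimeSpan.FromHours(1),
              AllowInsecureHttp = true, // for local testing only
              Provider = new SimpleAuthorizationServerProvider()
          });
          app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
      }
  }

  public class SimpleAuthorizationServerProvider : OAuthAuthorizationServerProvider
  {
      public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
      {
          context.Validated(); // no client credentials in this sketch
          return Task.FromResult<object>(null);
      }

      public override Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
      {
          // Placeholder: validate context.UserName / context.Password
          // against your own user store before issuing claims.
          var identity = new ClaimsIdentity(context.Options.AuthenticationType);
          identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
          context.Validated(identity);
          return Task.FromResult<object>(null);
      }
  }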
Why?
All this sounds good, but what happens when we go to production? There we need to be able to scale our backend from one node to 3 or 10 nodes.
Do we need sticky sessions, so that all requests from the same client are redirected to the same instance (because a token can be decrypted only by the machine that generated it)?
NO, we don't need them.

How?
To allow other instances or servers to decrypt the token and access its information, we need to set the machine key. This can be configured directly in web.config. Once this configuration is in place, all encryption and decryption done by that application will use these keys.

In the system.web section of the configuration file we need to add the machineKey node, specifying the validation key, the decryption key and which validation and decryption algorithms we want to use.
  <system.web>
    <compilation debug="true" targetFramework="4.5" />
    <httpRuntime targetFramework="4.5" />
    <machineKey validationKey="BDE5239FBD71982481D87D815FA0A65B9F5982D99DFA96E6D92B782E0952D58818B479B19FF6D95263E85B0209297E6858B57D1E0BD3EFECE5E35742D605F2A7"
              decryptionKey="8E8496D7342EA258526CF6177E04EA7D208E359C95E60CD2A462FC062B9E41B3"
              validation="SHA1"
              decryption="AES"/>
  </system.web>
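To illustrate the scale-out scenario, here is a sketch of a second node. It never issues tokens itself; because its web.config carries the same machineKey, the bearer middleware alone can decrypt and validate tokens issued by the first node (this assumes both nodes are hosted on IIS/System.Web, where Katana protects tokens with the machine key).
  using Microsoft.Owin.Security.OAuth;
  using Owin;

  public class Startup
  {
      public void Configuration(IAppBuilder app)
      {
          // This node only validates tokens. Because web.config carries the
          // same machineKey as the issuing node, the middleware can decrypt
          // tokens generated anywhere in the farm.
          app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
      }
  }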
Be aware: don't use the sample keys above, you should generate your own keys, otherwise "I" will be able to decrypt your tokens too.
For IIS, the machine key and validation key can be generated easily using IIS features (https://technet.microsoft.com/en-us/library/cc755177(v=ws.10).aspx) or with PowerShell (http://support.microsoft.com/kb/2915218#AppendixA)
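If you don't want to use IIS or the linked PowerShell script, a key can also be generated with a few lines of C#. This is only a sketch: the byte lengths below (64 for the validation key, 32 for the AES decryption key) match the sizes used in the web.config sample above.
  using System;
  using System.Security.Cryptography;

  class MachineKeyGenerator
  {
      static string GenerateKey(int byteLength)
      {
          // Fill a buffer with cryptographically strong random bytes
          // and format it as the hex string web.config expects.
          var bytes = new byte[byteLength];
          using (var rng = new RNGCryptoServiceProvider())
          {
              rng.GetBytes(bytes);
          }
          return BitConverter.ToString(bytes).Replace("-", string.Empty);
      }

      static void Main()
      {
          Console.WriteLine("validationKey: " + GenerateKey(64));
          Console.WriteLine("decryptionKey: " + GenerateKey(32));
      }
  }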

Conclusion
In conclusion, we saw that it is not complicated to 'connect' multiple application instances so that they use and accept the same token, but we need to share the machine key between them.
Another option is to have a dedicated machine that manages and validates tokens, but that is another story.
