
Azure Storage authentication using Azure AD

The cloud, and Azure in particular, has become a complex environment, with a large number of services and details that you need to take into account. Even so, the base services such as Azure VMs, Storage, and networking remain part of the day-to-day work.
Azure Storage is one of the most used services inside Microsoft Azure, directly or indirectly. Most other services running in the cloud rely on it to store and work with data.
Until now, access control to Azure Storage was possible using account keys or Shared Access Signatures (SAS). The combination of the two is powerful and covers most scenarios for small and mid-size companies.
Issues appear for enterprise customers, where (Azure) Active Directory is part of their core services and it is crucial that access to Azure Storage is controlled through AD. For an organization with 10,000 employees or more, sharing access to resources using SAS tokens is possible, but it adds extra complexity to the system. The real problems appear the moment you need to revoke access to a specific storage resource, where things are not so straightforward with Shared Access Signatures or Shared Access Policies.

In the second quarter of this year, the Azure team announced support for Azure AD authentication and authorization for Azure Storage. This gives us a mechanism to control access to data through a single, consolidated solution across the whole organization. It enables the IT team to grant or revoke access to Azure Storage content on the fly, based on user roles.
This new feature, in combination with Managed Service Identity (MSI), allows teams to assign specific AD roles to applications and services running on top of Azure, not only to people. In this way, we can have granular access control to Azure Storage content even at the application layer.
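A minimal sketch of what this can look like for an application with MSI enabled is shown below. It assumes the Microsoft.Azure.Services.AppAuthentication and WindowsAzure.Storage NuGet packages, a hypothetical storage account named mystorageaccount with a container named reports, and a data-access role already assigned to the application's identity:

using System.Threading.Tasks;
using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;

class MsiStorageSample
{
    static async Task Main()
    {
        // Ask the local MSI endpoint for an Azure AD token scoped to Azure Storage.
        var tokenProvider = new AzureServiceTokenProvider();
        string accessToken = await tokenProvider.GetAccessTokenAsync("https://storage.azure.com/");

        // Wrap the token in storage credentials; no account key or SAS token is involved.
        var credentials = new StorageCredentials(new TokenCredential(accessToken));
        var account = new CloudStorageAccount(credentials, "mystorageaccount", "core.windows.net", useHttps: true);

        // The call succeeds or fails based on the RBAC role assigned to the application's identity.
        var container = account.CreateCloudBlobClient().GetContainerReference("reports");
        await container.GetBlockBlobReference("demo.txt").UploadTextAsync("written with an Azure AD token");
    }
}

Because no secret is stored in the application, removing the role assignment is enough to cut off the application's access.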
Behind the scenes, a request is made to Azure AD using the OAuth 2.0 protocol. Azure AD returns an access token to the application, which can then be used to access Azure Storage. The first thing that needs to be done is to configure RBAC on the storage account, granting a security principal a role that specifies what it can access and with which permissions.
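For a registered Azure AD application, the token request itself is a standard OAuth 2.0 client credentials call. The sketch below uses the ADAL library (Microsoft.IdentityModel.Clients.ActiveDirectory) with placeholder tenant, client id, and secret values, and requests a token for the https://storage.azure.com/ resource:

using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

class StorageTokenSample
{
    // Placeholders - replace with the values of your own tenant and registered application.
    const string Authority = "https://login.microsoftonline.com/<tenant-id>";
    const string ClientId = "<application-id>";
    const string ClientSecret = "<application-secret>";

    static async Task<string> GetStorageAccessTokenAsync()
    {
        // OAuth 2.0 client credentials flow against Azure AD.
        var authContext = new AuthenticationContext(Authority);
        AuthenticationResult result = await authContext.AcquireTokenAsync(
            "https://storage.azure.com/",                  // the Azure Storage resource
            new ClientCredential(ClientId, ClientSecret)); // the registered AD application

        // The bearer token is then attached to storage requests, for example through
        // new StorageCredentials(new TokenCredential(result.AccessToken)).
        return result.AccessToken;
    }
}

The token has a limited lifetime, so a long-running application has to refresh it before it expires.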
The full configuration flow can be a little complex because it involves creating and registering an Azure AD application and granting it access to Azure Storage, but the process is well documented and straightforward.
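For illustration, the "grant access" step is a role assignment (for example one of the built-in Storage Blob Data Reader/Contributor preview roles) whose scope can be the whole storage account or a single container. The snippet below only sketches the ARM scope format, with placeholder subscription, resource group, account, and container names:

// A sketch of the RBAC scope strings used when assigning a data-access role.
static class StorageScopes
{
    // Scope covering the whole storage account.
    public const string Account =
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.Storage/storageAccounts/<account-name>";

    // Narrower scope: a single blob container inside that account.
    public const string Container =
        Account + "/blobServices/default/containers/<container-name>";
}

Assigning the role at container scope is what gives the container-level access control mentioned in the list below.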

Things that you should consider:
  1. At this moment the feature is available only for Blob storage and Queues
  2. The storage account needs to be created using the Resource Manager deployment model
  3. Custom RBAC roles are supported
  4. Access can be controlled at container and queue level (controlling it at individual blob level is not recommended)

Because the feature is still in preview, the following limitations exist:
  1. Page blob access for premium storage is not yet available
  2. Logging information in Azure Storage Analytics is not yet supported

This great new feature allows us to integrate and manage storage access from a central location, together with all the other systems that use Azure AD.




