
Part 2 - Shared responsibility / The landing zone of a PCI-DSS compliant application inside Microsoft Azure

In the previous article, we talked about the core concepts of PCI-DSS and the impact of storing, processing and transmitting credit card data. 

This article focuses on the shared responsibility concept and the importance of having all the parties at the same table. I promise that starting with the next article of this series, we will move to the technical side, but for now, we need a clear understanding of who the players are and what level of responsibility each of them has.

The solution we plan to build relies on Microsoft Azure, Azure Kubernetes Service (AKS), Azure SQL and Azure Cosmos DB. Considering these services, there are three main parties at the table:

  1. Microsoft Azure
  2. Kubernetes (AKS)
  3. The solution owner (the customer)
Depending on the solution, a shared responsibility exists between all of them, covering five main aspects: infrastructure, access control, network security, data protection and malware detection. The combination of these five aspects and the three parties defines and shapes what the shared responsibility model looks like, as sketched below.
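
To make the model tangible, the matrix of aspects and parties can be written down as a simple lookup table. The sketch below is illustrative only: the concrete assignments are hypothetical examples, not an official PCI-DSS responsibility matrix.

```python
# Illustrative sketch: the shared responsibility model as a lookup table.
# The assignments below are hypothetical examples, not official guidance.
RESPONSIBILITY = {
    "Infrastructure":    {"Microsoft Azure", "Kubernetes (AKS)"},
    "Access control":    {"Microsoft Azure", "Customer"},
    "Network security":  {"Kubernetes (AKS)", "Customer"},
    "Data protection":   {"Customer"},
    "Malware detection": {"Microsoft Azure", "Customer"},
}

def responsible_parties(aspect: str) -> set[str]:
    """Return the parties that share responsibility for a given aspect."""
    return RESPONSIBILITY[aspect]

if __name__ == "__main__":
    for aspect, parties in sorted(RESPONSIBILITY.items()):
        print(f"{aspect}: {', '.join(sorted(parties))}")
```

The real responsibility split depends on which Azure services you use and how you use them; the point of the sketch is only that each aspect rarely belongs to a single party.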

Microsoft Azure
The primary responsibility of Microsoft is to provide the PCI-DSS attestation of compliance, including the audit reports. Microsoft offers clear information about who is responsible for what at each service level through a responsibility matrix that covers each PCI-DSS requirement.
For compliance standards like PCI-DSS, HIPAA, SOC and many more, a specific set of services and functionality was built inside Microsoft Azure, giving us (the customer) the ability to build systems that are PCI-DSS compliant.
The compliance matrix is available here.
Kubernetes (AKS)
AKS is built on top of Kubernetes, which is an open-source system. What Microsoft offers on top of it is the ability to spin up, run, scale, and manage large Kubernetes clusters without the complexity of managing the cluster and the infrastructure behind the scenes.
It is not simple, but it makes our life much easier and makes cluster management look effortless.
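
To give a feeling of how thin the surface left to us is, here is a minimal sketch of provisioning an AKS cluster with the Azure SDK for Python, assuming the azure-identity and azure-mgmt-containerservice packages are installed; the subscription ID, resource names, region and VM size are placeholder assumptions:

```python
# Minimal sketch: provisioning an AKS cluster with the Azure SDK for Python.
# All names, the region and the VM size below are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "rg-pci-demo"         # hypothetical resource group
CLUSTER_NAME = "aks-pci-demo"          # hypothetical cluster name

client = ContainerServiceClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# We only describe the desired cluster; Microsoft manages the control plane
# and the underlying infrastructure for us.
poller = client.managed_clusters.begin_create_or_update(
    RESOURCE_GROUP,
    CLUSTER_NAME,
    {
        "location": "westeurope",
        "dns_prefix": "akspcidemo",
        "identity": {"type": "SystemAssigned"},
        "agent_pool_profiles": [
            {
                "name": "nodepool1",
                "mode": "System",
                "count": 3,
                "vm_size": "Standard_DS2_v2",
            }
        ],
    },
)
cluster = poller.result()  # blocks until provisioning completes
print(cluster.provisioning_state)
```

Everything below that API call (the masters, etcd, control plane upgrades) stays on Microsoft's side of the shared responsibility line.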
The customer
The customer is the final owner of the solution and decides how to build it. The customer has available a complete list of Microsoft Azure services and features that are PCI-DSS compliant.
The customer needs a strong understanding of PCI-DSS to ensure that the right Azure services are used and that all the required processes are successfully implemented, as the toy sketch below illustrates.
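
As a toy illustration of that verification step (the approved list below is hypothetical, not Microsoft's official compliance matrix), the first check can be as simple as comparing the services a solution uses against an approved list:

```python
# Toy sketch: flag services in a solution that are not on an approved,
# PCI-DSS compliant list. Both lists below are hypothetical examples.
APPROVED_SERVICES = {
    "Azure Kubernetes Service",
    "Azure SQL Database",
    "Azure Cosmos DB",
    "Azure Key Vault",
}

def unapproved_services(solution_services: set[str]) -> set[str]:
    """Return the services used by the solution that need a compliance review."""
    return solution_services - APPROVED_SERVICES

if __name__ == "__main__":
    solution = {"Azure Kubernetes Service", "Azure SQL Database", "Some Preview Service"}
    flagged = unapproved_services(solution)
    if flagged:
        print("Review needed for:", ", ".join(sorted(flagged)))
```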


The compliance requirements force the three parties to build a system that is not optimal in terms of cost, technology or flow. Nevertheless, it ensures that credit card data is well protected once a user types their credit card data into the system. Later in this series, we discuss why, for small systems, you can end up paying more for virtual security appliances than for the AKS cluster itself.

Microsoft Trust Center
Microsoft Trust Center is the entity inside Microsoft that ensures that compliance, privacy and security best practices are followed and that regular audits and attestations by a QSA (Qualified Security Assessor) are performed for all compliance programs, including PCI-DSS.
It should be the starting point of your journey if you want to build a PCI-DSS compliant system using Microsoft and Microsoft Azure technologies.
