
COVID-19 is a cloud security catalyst

Almost two years ago, we had to accept that the world as we knew it had changed. Most companies moved their workforce to remote working, and nowadays the hybrid work model is part of normal life. Companies started to invest more in their digitalisation and cloud adoption programs. In this article, we talk about the current state of security and the top 10 security areas we need to invest in when doing a cloud adoption (cloud migration).

Cloud adoption and remote working changed how we build IT solutions and how we tackle security. From my point of view, COVID-19 was and still is a security catalyst, exposing us to new digital risks and making us more aware of IT and cloud security.

Security Impact

To understand the real impact of the current situation, let's take a look at some statistics:

  • Global adoption of digitalisation has increased to 55% over the last seven years
  • 48% of companies had to accelerate their cloud migration programs during the pandemic
  • 60% of companies adjusted their cloud cybersecurity posture as a result of distributed workforces
  • In 2021, the top two priorities for companies were securing the remote workforce (the highest priority for 38% of them) and migrating to cloud-native services (the highest priority for 36%)
We might ask ourselves, "Why is security such a big deal nowadays?" 2020 was not easy for cybersecurity teams. Every 2-3 days, we read in the press that another data breach had happened. I would even say that in 2020 data breaches became a commodity and part of our lives.
How many security breaches happened in 2020? Let's take a look at a security report that helps us understand where we stood in 2020:
  • The number of cyber-attacks increased by 250%
  • The number of large-scale breaches increased by 273%
  • 47% of individuals fell for phishing scams while working from home
  • Phishing attacks increased by 350%
In just 4 months (Feb-May 2020), more than 500,000 people globally were affected by breaches in which the personal data of video conferencing users was stolen and sold on the dark web. Information such as names, email addresses and profiles was collected from unsecured or public video events and sold.

Cloud Impact
The demand for cloud services, especially computation, increased drastically in the first 6 months of 2020. The average increase in cloud workloads was around 65%, putting the cloud infrastructure to the test. The highest growth was in APAC and EMEA with 70%, followed by AMER and Japan.
Industries like Retail and Insurance increased their cloud footprint by 60% and 74% respectively. We would expect this, because most shopping moved online. Retail companies that could not move to digital platforms struggled in that period. People's behaviour also changed: they preferred to resolve their problems over the internet (e.g. insurance, groceries).

The highest increase in cloud demand was in chemical manufacturing. In a short period, these companies had to run a lot of calculations and simulations. The limited capacity of their on-premises infrastructure was an important factor that made the cloud footprint of chemical manufacturing increase by 83%.


To better understand the impact of security inside the cloud, I would like to share the following story:

Veeam: customer records compromised by an unprotected database. Near the end of August 2018, the Shodan search engine indexed an Amazon-hosted IP. Bob Diachenko, director of cyber risk research at Hacken.io, came across the IP on 5 September and quickly determined that it resolved to a database left unprotected by the lack of a password.
The exposed database contained 200 gigabytes of data belonging to Veeam, a backup and data recovery company. Among that data were customer records including names, email addresses and some IP addresses. How encryption may become a factor in scenarios like this: user names and passwords are a relatively weak way of securing private access. Plus, if an organisation does not maintain complete control of the private keys that govern access to internal systems, attackers have a better chance of gaining access. Impact: within three hours of learning about the exposure, Veeam took the server offline. The company also reassured TechCrunch that it would "conduct a deeper investigation and… take appropriate actions based on our findings".

Security Angles
Cloud security is not only important, but also complex, because it covers multiple aspects of an IT product. Many times we focus our effort on securing the application, yet we forget that our development teams are pushing secrets to the main repository in cleartext, or that the machines used by DevOps and development are not secure enough, making them a perfect jump-box for an attack that could expose cloud credentials and, in the end, customer data.

When we talk about cloud security, we need to consider authentication, authorisation, access control, log management, operation audit and everything else related to cloud governance. There are 4 types of protection that we need to take into account:
  • Cloud Platform Protection
  • Cloud Product Protection
  • Data Protection
  • Environment Protection

The first 2 of them, Cloud Platform and Cloud Product Protection, are offered out of the box by cloud vendors. The only thing we need to do is ensure that we activate encryption and do the proper configuration for data recovery. On top of that, we have Data and Environment Protection, where the cloud vendor and the service provider offer a lot of functionality. Our responsibility is to ensure that we configure the services correctly to meet our security expectations.

What can we do? 
There are a lot of materials and resources that cover what we can do to build more secure systems. In the next part of the article, I cover the most important things we need to consider to improve the overall cloud security of our systems, with a direct impact on the organisation.
Security is not only about infrastructure, software and IT. It's about people, processes and education. 

(1) Limit the use of cloud services that are not in GA (General Availability)
CSP vendors like AWS, Azure, and GCP provide services and features in different development states. For example, a service or feature provided by Microsoft Azure can be in one of the following 3 states:
  • Private Preview / NO SLA, NO formal support
  • Public Preview / NO SLA, limited formal support
  • General Availability / SLA and formal support
A clear statement from Microsoft says that only services in GA should be used for production environments or when sensitive data is stored. This is not only about the SLA and support. Sometimes the services are simply not ready for production and hide issues (e.g. security ones). The best example is the Cosmos DB Notebook integration security issue from August 2021. The root cause was new functionality integrated into Cosmos DB to support Jupyter Notebooks, a Public Preview feature. People rushed to use it in production environments, forgetting that only GA features should be used when sensitive data is involved.
All services that are in Private or Public Preview are marked with a specific tag. Sometimes the timeline to GA is available and can be used to align with your system lifecycle. When the timeline is not available, you should be careful how you use the preview, because a service can stay in preview for an extended period.
You can find below two examples of how you can identify a service or a functionality that is in preview.
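Beyond spotting the preview tag in the portal, a release pipeline can enforce the GA-only policy programmatically. Below is a minimal, hypothetical sketch in Python; the service names and their states are illustrative, not real vendor data:

```python
# Hypothetical inventory of cloud services used by a workload and the
# release state published by the cloud vendor for each of them.
SERVICE_STATES = {
    "managed-database": "GA",
    "notebook-integration": "PublicPreview",
    "confidential-compute": "PrivatePreview",
}

def gate_production_deploy(services):
    """Reject a production deployment that depends on non-GA services."""
    blocked = [s for s in services if SERVICE_STATES.get(s, "Unknown") != "GA"]
    if blocked:
        raise RuntimeError(f"Non-GA services not allowed in production: {blocked}")
    return True
```

A check like this, run as a pipeline step, makes the "GA-only in production" rule explicit instead of relying on every engineer remembering it.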

(2) Extend team skills
Working in the cloud is not only DevOps work. In most cases, people assume that DevOps teams are superheroes, able to handle data security, DR/BC planning, governance, IAM (Identity and Access Management), compliance, availability and much more.
A complete team needs to include, besides DevOps, SMEs (subject matter experts) who can cover the security, infrastructure and cloud design aspects. IaC, configuration and automation are implemented by DevOps people, but the design of the infrastructure, security and so on needs to be done by other SMEs.
The main reasons for not involving other SMEs in the team are costs and wrong assumptions. From the cost point of view, people forget that you don't need a security or infrastructure SME allocated 100%; just a few days of consultancy from such an SME are more than enough. The assumption that DevOps can cover all other tech topics not related to development is also wrong.
Ensuring that you involve the right people in the design and development phases is crucial for the success of the project.

(3) Education
The impact of a cloud adoption program does not stop at the IT department. All the departments of the organisation are affected, from HR and Finance to Legal and Suppliers. Internal processes and procedures are transformed to align with the new ways of running business workloads.

A learning program should be defined at the company level to ensure that each group of people is aware of how they will be impacted and of the implications of having a system running outside their organisation. Cloud vendors like Microsoft offer cloud learning paths for non-technical staff that can be used in these learning programs.
A Cloud Center of Excellence needs to be established to define clear boundaries and govern how the cloud is integrated and used by the organisation: for example, which cloud services are approved for use, how, and in which situations.

(4) Leverage SaaS and PaaS services
Cloud migration is easily done when relying on IaaS services (VMs, bulk storage), but it comes with a cost: the cost of managing and operating the IaaS platform, together with the risk of exposing data, unsecured, to the public internet.
These risks are reduced when PaaS and SaaS services are adopted. The cloud provider (e.g. Azure) is responsible for the security updates of the services, so part of the responsibility moves from the organisation to the cloud provider. In this way, the team has more time to focus on the business and less on the infrastructure.

(5) Private networks and internal endpoints
A system does not have to be publicly available from the internet. You can build it inside a private network, hidden from the internet. All the cloud services you use can be reachable only through internal endpoints that are not accessible from the public internet. You end up with a system similar to one running on your own infrastructure, closed by design to the public internet.

(6) Identity and Access Management
Database or service access using a master key, token or master user should never be used inside Azure or any other cloud provider. Using AWS IAM or Azure RBAC, you can control and manage the access of users and other services to resources based on roles. You become 'passwordless', and access between services is managed based on their role and scope, without a hardcoded key or token.
For example, Azure RBAC can be used to define a development group that has limited rights over a specific group of services. It is important to mention that the security principal can be a user, a group or a service principal (represented by an Azure VM, an Azure Function or any other Azure service).
The top 3 things that you need to be aware of are:
  • Segregate duties within your team and grant users only the amount of access they need to perform their jobs
  • Specific permissions create unneeded complexity and confusion, accumulating into a "legacy" configuration that's difficult to fix without fear of breaking something
  • Avoid resource-specific permissions. Instead, use management groups for enterprise-wide permissions and resource groups for permissions within subscriptions. Avoid user-specific permissions; instead, assign access to groups in Azure AD
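To make the scope model concrete, here is a minimal, hypothetical sketch of how RBAC-style evaluation works: an assignment grants a role to a principal at a scope, and the permission applies to that scope and everything beneath it. The paths and role names below are illustrative, not real Azure identifiers:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleAssignment:
    principal: str   # a user, group or service principal
    role: str        # e.g. "Reader", "Contributor"
    scope: str       # e.g. "/mg/corp/sub/dev/rg/app" (hypothetical path)

def is_allowed(assignments, principal, role, resource_scope):
    """A role assigned at a scope applies to that scope and all child scopes."""
    return any(
        a.principal == principal
        and a.role == role
        and (resource_scope == a.scope or resource_scope.startswith(a.scope + "/"))
        for a in assignments
    )
```

Assigning roles to groups at the resource-group or management-group level, as the bullet points recommend, keeps the assignment list short and easy to audit.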

(7) Secrets Scanning
We can protect our code, secrets and identity using tools that automatically scan the repository and ensure no secrets are pushed to it. We can integrate solutions like 'git-secrets' to scan a commit before a push and reject the push if secrets are found.
Different strategies can be defined in the pipelines to fail a build or even remove the secrets automatically from the repository.
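The core of such tools is pattern matching over the committed content. A toy scanner in Python shows the idea (the AWS access key ID shape, AKIA followed by 16 characters, is a well-known format; the generic pattern is a heuristic of my own and will produce false positives):

```python
import re

# Patterns for common credential shapes. Real tools such as git-secrets
# ship larger, vendor-maintained pattern sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_assignment": re.compile(
        r"(?i)\b(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_text(text):
    """Return (pattern_name, matched_text) pairs for suspected secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Running such a scan in a pre-push hook or a pipeline step, and failing the build on any finding, implements the strategies described above.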

(8) Public endpoints protection
The public endpoints that the solution exposes can be protected by a cloud gateway; AWS WAF and Azure Application Gateway are two good examples. Azure Application Gateway is an L7 load balancer with an integrated Web Application Firewall (WAF). This enables us to detect and stop an attack before it reaches our workloads. Azure WAF can identify and stop the OWASP Top 10 attacks out of the box. You don't need any special configuration at the application layer; the protection is offered as a service.
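Conceptually, a WAF inspects each request against rule sets before it reaches the application. The sketch below is a drastic simplification to show the idea; real rule sets such as the OWASP Core Rule Set use normalisation, anomaly scoring and hundreds of patterns:

```python
import re

# Extremely simplified request-inspection rules in the spirit of a WAF
# rule set. These two patterns catch only the most naive attack strings.
RULES = {
    "sqli": re.compile(r"(?i)(\bunion\b\s+\bselect\b|'\s*or\s+'1'\s*=\s*'1)"),
    "xss": re.compile(r"(?i)<\s*script\b"),
}

def inspect_request(query_string):
    """Return the names of the rules triggered by a query string."""
    return [name for name, rule in RULES.items() if rule.search(query_string)]
```

The value of a managed WAF is exactly that you do not write these rules yourself: the vendor maintains and updates them for you.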

(9) Tracking and Monitoring
You should never rely only on the application logs, and you should not do tracking and monitoring the same way you would on an on-premises system. There is a set of items to consider when you adopt the cloud, and you need to ensure that you collect the right information.
The audit data, logs and metrics you collect should come from all 5 layers (Tenant, Subscription, Resource, OS, Application). Besides this, you need a deep application monitoring system like Azure Application Insights and Azure Security Center that allows you to track and discover what is happening behind the scenes from all perspectives.
On the other side, you have a set of services like Service Map, Network Monitoring and Log Analytics that enable us to do deep infrastructure monitoring in a similar way to what we used to do on our own network.
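One practical implication of the five-layer model is to tag every log entry with the layer it comes from, so that queries can later correlate events across layers. A minimal, hypothetical sketch of such structured logging (the field names are my own, not an Azure schema):

```python
import json
from datetime import datetime, timezone

# The five layers named in the text above.
LAYERS = {"tenant", "subscription", "resource", "os", "application"}

def log_event(layer, message, **fields):
    """Emit a structured, JSON-encoded log line tagged with its layer."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "layer": layer,
        "message": message,
        **fields,
    }
    return json.dumps(entry)
```

Structured, layer-tagged entries are what make cross-layer queries in a tool like Log Analytics possible in the first place.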

There are 3 components that we need to consider and can be seen in the below image.

(10) Integrate the built-in security systems
Rely less on custom dashboards or on-premises security systems. Native solutions like Azure Security Center provide infrastructure compliance checks (e.g. HIPAA, PCI-DSS), an inventory of our cloud and on-premises resources, security and threat alerts and protection, and automatic secure scoring for our systems.

Final thoughts
Each cloud vendor provides security best practices, procedures and recommendations as part of its Well-Architected Framework (WAF). The team needs to know this framework before designing and implementing the solution.
