
Managing cloud training environments inside Azure and AWS


Providing learning and training cloud environments where engineers can experiment is not as cheap as you might think. One of the most expensive resources is compute, such as the VMs that are used not only to play with the cloud itself but also to host training resources like a specific CMS.
In general, training environments do not need to run 24/7. In most cases they are used only during working hours, 8/5. On top of that, they do not even need to run for the full working hours: they can be spun up only when someone wants to use them; otherwise there is no need to run them.
By reducing VM running time from 24/7 to 8/5 you automatically save around 60% of the VM cost over the year. For example, a training environment that was initially estimated at around $4000/year was reduced to $2500 just by cutting the running time from 24/7 to 8/5.
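As a back-of-the-envelope check (with an assumed hourly rate, not a quoted Azure or AWS price), the raw compute-hour reduction is even higher than 60%; the gap comes from disks, public IPs and other costs that keep accruing while the VMs are deallocated.

```python
# Back-of-the-envelope sketch of the saving from running 8/5 instead of 24/7.
# The hourly rate below is a placeholder, not a quoted Azure/AWS price.
HOURS_24_7 = 24 * 7          # 168 compute-hours per week
HOURS_8_5 = 8 * 5            # 40 compute-hours per week
HOURLY_RATE = 0.20           # assumed $/hour for an illustrative VM size

weekly_24_7 = HOURS_24_7 * HOURLY_RATE
weekly_8_5 = HOURS_8_5 * HOURLY_RATE
compute_saving = 1 - HOURS_8_5 / HOURS_24_7

print(f"24/7: ${weekly_24_7:.2f}/week, 8/5: ${weekly_8_5:.2f}/week")
print(f"Raw compute-hour reduction: {compute_saving:.0%}")  # ~76%
# Disks, public IPs and other fixed costs keep accruing while the VMs are
# deallocated, which is why the saving on the whole environment
# (e.g. $4000 -> $2500 above) ends up lower than the raw hour reduction.
```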

Startup/Shutdown automatically

Microsoft Azure - Runbooks can be used inside Azure Automation for this. You can define a runbook that automatically starts the VMs under a resource group at a specific hour and shuts them down in the evening. You can find more in the official Azure documentation - https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management
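For reference, below is a minimal sketch of what such a runbook could look like in Python, using the Azure SDK (azure-identity and azure-mgmt-compute). The subscription ID, resource group name and the use of a managed identity are assumptions; the Start/Stop VMs solution from the documentation above remains the supported way to do this.

```python
# Minimal sketch of a start/stop routine for all VMs in a resource group,
# using the Azure SDK for Python. Assumes the Automation account's managed
# identity has the required permissions on the resource group.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"       # assumption: your subscription
RESOURCE_GROUP = "training-environment-rg"  # assumption: training resource group

def set_vm_power_state(action: str) -> None:
    """Start or deallocate every VM in the resource group ('start' or 'stop')."""
    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)
    for vm in compute.virtual_machines.list(RESOURCE_GROUP):
        if action == "start":
            compute.virtual_machines.begin_start(RESOURCE_GROUP, vm.name)
        else:
            # Deallocate (not just power off) so compute charges stop accruing.
            compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm.name)

if __name__ == "__main__":
    set_vm_power_state("start")   # schedule one runbook for the morning...
    # set_vm_power_state("stop")  # ...and another for the evening shutdown
```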

AWS – The official solution provided by Amazon is based on AWS Lambda, Amazon DynamoDB and Amazon CloudWatch. The schedule is stored in DynamoDB, and the start and stop of the instances is done by a Lambda function triggered by CloudWatch. The CloudFormation template to implement this solution can be found here - https://aws.amazon.com/solutions/instance-scheduler/
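As a rough illustration of the same idea (not the official Instance Scheduler itself), a much simpler Lambda handler written with boto3 could start or stop EC2 instances that carry a hypothetical Schedule=office-hours tag, triggered by two CloudWatch Events/EventBridge rules. The tag name and the event shape are assumptions.

```python
# Simplified sketch of a scheduler Lambda: one EventBridge rule invokes it in
# the morning with {"action": "start"} and another in the evening with
# {"action": "stop"} (assumed event shape). The official Instance Scheduler
# stores schedules in DynamoDB; this sketch hard-codes the behaviour.
import boto3

ec2 = boto3.client("ec2")
TAG_FILTER = [{"Name": "tag:Schedule", "Values": ["office-hours"]}]  # assumed tag

def lambda_handler(event, context):
    action = event.get("action", "stop")
    reservations = ec2.describe_instances(Filters=TAG_FILTER)["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if not instance_ids:
        return {"action": action, "affected": []}
    if action == "start":
        ec2.start_instances(InstanceIds=instance_ids)
    else:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"action": action, "affected": instance_ids}
```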

Start Manually / Shutdown automatically
The biggest problem with the solutions above is that your VMs are not used all the time: you might have days when the VMs are running, but nobody is using them. For these situations, you can configure only the shutdown part of the automation presented above, and leave the start of the VMs to be done manually by each user when they need the resources.
You might also want some VMs to start automatically in the morning, while the rest start only when needed.
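A shutdown-only variant could look like the sketch below: an evening runbook that deallocates only the VMs that are still running, leaving start-up entirely to the engineers. The resource group name and credentials are the same assumptions as in the earlier sketch.

```python
# Shutdown-only sketch: deallocate every VM that is still running in the
# evening, regardless of who started it and when.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"       # assumption
RESOURCE_GROUP = "training-environment-rg"  # assumption

def shutdown_running_vms() -> None:
    compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    for vm in compute.virtual_machines.list(RESOURCE_GROUP):
        # Check the current power state before issuing a deallocate call.
        view = compute.virtual_machines.instance_view(RESOURCE_GROUP, vm.name)
        if any(s.code == "PowerState/running" for s in view.statuses):
            compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm.name)

if __name__ == "__main__":
    shutdown_running_vms()
```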

Dev/Test Labs Environments
When things become more complicated and you need to define access policies, have tighter control over resources and decide what kind of actions each user can perform, you can start using solutions like Azure DevTest Labs, which lets developers manage their VMs by themselves. At the same time, at company level you can define policies for what kind of VMs can be spun up, how many of them, the time when all resources are shut down in the afternoon and much more.

Conclusion
I prefer option no. 1 or 2, where you have automated systems that can spin up or shut down the compute resources. There is enough flexibility to let the engineers define their own schedule, and at the same time you have a policy layer on top of the resources that enables you to control the costs.
When you are in this situation, take into account the following aspects:
  1. Maximum no. of VMs that can be created
  2. Who has the rights to create new VMs
  3. Who has the rights to start a VM
  4. Who can change the tier/size of the VMs
  5. Is an automatic shut-down mechanism in place?
  6. Who can configure the shut-down mechanism?

