For some time now, the fashionable term “sustainable” has been creeping into various areas of our lives. In general, it describes an approach that pays attention to aspects such as the environment, capital, and economics.
How much do you pay for electricity in your server room? To what extent do you actually use your infrastructure, and is that an optimal level? How much does the work of maintaining it cost you? What does your IT team deal with every day, and to what extent does their work grow your business?
You probably don't know the answers to some of these questions at all, and it may never have occurred to you to ask them.
At first, when the cloud “pioneers” made their attempts to estimate how much using the cloud would cost, they didn't take some of these costs into account either. Why would they? After all, it's the cloud provider who pays for them (or at least for some of them).
The cloud provider, however, unlike you, counts all of these items very precisely: electricity, space, maintenance, and so on. And why? So it can ultimately include these costs in your bill for cloud services.
And this is where the disappointment came: the cloud can be expensive. And it will be expensive if you try to carry your old habits into the cloud.
You pay for what you use
The first big surprise came when suddenly, every month, you had to pay for everything you had started, whether you used it or not. So where is the sustainable use here?
It only appears when you start paying attention to how many resources you really need and how many you currently have running. For example, if your developers aren't working 24 hours a day (and I assume they aren't), why not shut those resources down when they're not in use?
When you need it, take it and use it; when you don't, give it back. It's that simple and obvious.
The conclusion: “turn off the lights when you leave the room.” Use only as much as you need at any given time. Both the environment and your finances will benefit.
All the more so because the cloud offers solutions that help you manage resources more wisely, such as auto-scaling, cost monitoring, automation, and so on.
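The “turn off the lights” idea can be sketched as a simple schedule check. This is a minimal sketch, not a ready-made tool: the working hours, workdays, and the idea of a tagged set of development instances are all illustrative assumptions you would adapt to your own teams.

```python
from datetime import datetime

# Illustrative assumptions: development resources are needed on weekdays
# between 07:00 and 20:00 local time. Adjust to how your teams actually work.
WORK_START = 7           # instances come up at 07:00
WORK_STOP = 20           # and are stopped at 20:00
WORK_DAYS = range(0, 5)  # Monday (0) through Friday (4)

def should_be_running(now: datetime) -> bool:
    """Return True if development resources should be up at this moment."""
    return now.weekday() in WORK_DAYS and WORK_START <= now.hour < WORK_STOP
```

A scheduler (for example a cron job or a periodically triggered function) could call this check and start or stop the appropriately tagged instances. Even this crude rule removes roughly two thirds of the hours a development environment would otherwise be billed for.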
Moreover, when migrating to the cloud or building a new cloud solution, there are several places where it's worth stopping and thinking a little deeper.
Sustainable cloud architecture
When you have the pleasure of designing the architecture of a new solution, that's the best time to think again about sustainable cloud use.
The selection of services for specific needs, and the way you use them, also affects how much power and how many resources you need, and how much you will end up paying for it.
Sometimes there are several such options and variants; just ask anyone who has ever checked how many ways there are to run containers in the cloud. The point, however, is to finally arrive at an architecture that, with a reasonable level of resources and cost, achieves the expected effect as fully as possible.
For example, just look at how many ways (storage classes) there are to keep data on Amazon S3. Putting 100 TB of rarely used data into the Standard class is certainly not the best option, at least in terms of cost.
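The difference is easy to see with some back-of-the-envelope arithmetic. The per-GB prices below are illustrative assumptions only, not current AWS pricing (real prices vary by region and change over time; retrieval and request costs are also ignored here), but they show the order of magnitude at stake for 100 TB of rarely accessed data.

```python
# Rough monthly storage cost comparison across S3 storage classes.
# NOTE: prices below are assumed for illustration; check the official
# S3 pricing page for real numbers in your region.
TB = 1024  # GB per TB

PRICE_PER_GB_MONTH = {
    "STANDARD": 0.023,                    # assumed: frequently accessed data
    "STANDARD_IA": 0.0125,                # assumed: infrequent access
    "GLACIER_INSTANT_RETRIEVAL": 0.004,   # assumed: rarely accessed archive
}

def monthly_storage_cost(size_gb: float, storage_class: str) -> float:
    """Storage-only cost per month; ignores requests and retrieval fees."""
    return size_gb * PRICE_PER_GB_MONTH[storage_class]

size = 100 * TB
for cls in PRICE_PER_GB_MONTH:
    print(f"{cls}: ${monthly_storage_cost(size, cls):,.2f}/month")
```

Under these assumed prices, moving 100 TB of cold data out of the Standard class would cut the monthly storage bill several-fold, which is exactly the kind of decision a sustainable architecture review should catch.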
I think it is when designing an application's architecture that our decisions have the greatest impact on:
The amount and type of resources we are going to need.
How much we will pay for all of this.
What environmental impact it will have (more resources mean more electricity, heat, CO2, etc.).
All this, of course, requires good knowledge of the services, the cloud platform, and the capabilities it provides. Only then can you prepare an architecture and choose a service configuration that achieves the intended effect while taking care of economic (and, of course, environmental) factors.
Sustainable cloud operation
Another aspect worth considering is day-to-day cloud work: the tasks that need to be done both to keep applications alive and to provision new resources.
Here, of course, the application's architecture and the cloud services it is built from matter a great deal. Depending on the service model (IaaS, PaaS, etc.), you will have different things and tasks to take care of on a daily basis.
For example, the decision to build a database cluster on virtual machines won't be the best option once we look at how much work it takes to maintain it. By choosing a database in a service model instead (e.g., Amazon RDS), a large part of the configuration and maintenance work falls on the cloud provider's side.
The second important fact is that by using managed services, we transfer to the provider the responsibility for managing the utilization of infrastructure resources. For example, when running containerized applications on AWS Fargate, we don't need to worry much about the efficiency and utilization of the underlying infrastructure. This is another advantage from a sustainability point of view: since the workloads run on shared resources and services, the provider is dedicated to making the most of the physical infrastructure.
So why waste time, energy, and money doing something the provider will certainly do better? In that time, you can take care of other things that translate directly into business value.
Another aspect is infrastructure configuration. With the arrival of the cloud, automation and configuration management in general developed heavily under the Infrastructure as Code approach.
Properly preparing and developing deployment processes and automation for repetitive tasks saves a huge amount of time and money (not to mention the security benefits). Additionally, you can work out optimal configuration patterns that satisfy your assumptions about a sustainable approach.
You should also remember to review your resources and their utilization cyclically. Experience shows that it is almost always possible to find something that can be optimized. If you are entering the cloud world, this is definitely something that needs to be built into your resource management process.
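A cyclical review like this can start very simply: collect an average utilization metric per resource (for example, CPU averages pulled from your monitoring system) and flag the outliers. The 10% threshold and the sample data below are illustrative assumptions; real reviews would look at more metrics than CPU alone.

```python
# Sketch of a periodic utilization review: flag resources whose average
# utilization is below a threshold as candidates for downsizing or stopping.
LOW_UTILIZATION_THRESHOLD = 10.0  # percent average CPU; assumed cut-off

def underutilized(metrics: dict[str, float],
                  threshold: float = LOW_UTILIZATION_THRESHOLD) -> list[str]:
    """Return sorted resource IDs with average utilization below threshold."""
    return sorted(rid for rid, avg_cpu in metrics.items() if avg_cpu < threshold)

# Hypothetical metrics, e.g. 30-day CPU averages per instance:
sample = {
    "i-0aa111": 3.2,   # almost idle: candidate for stopping or downsizing
    "i-0bb222": 41.7,  # reasonably used
    "i-0cc333": 8.9,   # worth a closer look
}
print(underutilized(sample))  # -> ['i-0aa111', 'i-0cc333']
```

Running a report like this on a schedule turns “we should optimize sometime” into a recurring, reviewable list of concrete candidates.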
This brings us to one more very important aspect: the application we build, which will use our infrastructure and cloud architecture. For all of this to make sense, the application we are building must, in a nutshell, also be aware of where it will run and which cloud mechanisms it will use. The term “Cloud Native Application” can help here: it defines the assumptions to follow when creating an application that takes full advantage of the capabilities the cloud provides.
For example, when automated scaling of some components is part of our architecture, the application cannot be indifferent to it, because otherwise various problems can arise in its operation. A lack of this awareness can force us to change our architectural assumptions, and the consequence may be less flexibility as well as higher costs.
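One common example of this awareness is state handling: under auto-scaling, instances appear and disappear, so the application cannot keep session state that only one instance knows about. A minimal sketch, with a plain dictionary standing in for an external store such as Redis or DynamoDB (the function and store names are hypothetical):

```python
# Shared session store: in production this would be an external service,
# so that any instance, including a freshly scaled-out one, can serve
# any request. Here a module-level dict stands in for it.
shared_store: dict[str, dict] = {}

def handle_request(session_id: str, key: str, value: str) -> dict:
    """Record a session attribute in the shared store and return the session.

    Because no state lives in the handling instance itself, the instance
    can be terminated by a scale-in event without losing anything.
    """
    session = shared_store.setdefault(session_id, {})
    session[key] = value
    return session
```

The design choice here is the point, not the code: externalizing state is one of the assumptions a cloud-native application has to make so that scaling mechanisms can work freely.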
The second thing, of course, is the application code itself. How many times have you seen pilgrimages to the infrastructure department asking to “add some CPU and RAM, because the app is running slowly...”? Of course, we can take this approach in the cloud as well. However, while the app may start to work better, it will show up on our cloud bill at the end of the month. And it has further consequences: over-use of infrastructure, electricity, and everything else that, with a sustainable approach, we should keep in mind.
There are, of course, many more things worth taking care of: the data architecture, how we will access the data, in what format it will be stored, and how we will build our applications. Will they be microservices running in containers, or will we opt for a “function as a service” approach? There are plenty of places to look for better, more sustainable use of cloud resources.
Cloud providers offer various services that help, for example, to optimize code or detect potential errors that translate into increased resource demand. At AWS, for instance, there is the Amazon CodeGuru service, which analyzes code in search of suboptimal, excessively resource-intensive (and, in the end, costly) fragments.
Why is it worth doing?
To conclude this thread, I'd like to point out what I believe has accompanied the cloud practically from the start: its optimal and wise use to meet stated needs and objectives. When we look at cloud providers and their approach, from the outset they have been guided by the sustainable use of resources and of the opportunities that technology, combined with nature and our skills, provides.
Every large provider invests heavily in renewable energy, and already today a large part of their physical infrastructure is powered this way.
Because in most cases they design and build the components of this infrastructure themselves (servers and other devices), they have a lot of freedom to look for the most optimal use of resources (energy, cooling, space, etc.). At their scale of business, improving something by a few percentage points results in huge numbers overall.
So, since they are trying so hard, it is worth doing our best to use resources wisely too. Providers, of course, try to help us with this by offering their knowledge and experience.
AWS recently expanded its Well-Architected Framework (a set of good cloud practices) with another pillar, dedicated precisely to sustainability.
The idea is to show how our decisions and our approach to using cloud infrastructure affect not only the ecological aspect but also costs. A large part of it revolves roughly around the things I have mentioned here as well.
As with security, there is also a “Shared Responsibility Model” here, showing where the provider's responsibility and influence lie, and where ours do.
So, if you are currently working on an architecture, a configuration, or the code for a solution, look around. Think not only about the costs your solution will generate or the business benefits. Also pay attention to the footprint your solution will leave on the environment and the world you will pass on to future generations.
ARTICLE ORIGINALLY APPEARED ON THE DELOITTE BLOG:
“Sustainable cloud: a sustainable approach to resource utilization”