Cloud optimization is an ongoing task for any organization driven by data. If you don’t believe you need to optimize, or you’re already optimized, you may not have the data necessary to see where you’re over-provisioned and losing spend. Revisit the optimization pillars frequently to best evolve with and take advantage of everything the cloud has to offer.
Begin with the end in mind
The big question is, where are you trying to go? This question should constantly be revisited with internal stakeholders and business leaders. Define the process that will get you there and follow the order of operations identified to reach your optimization goal. Losing sight of the purpose, getting caught up in shiny new tools, or failing to incorporate the right teams could lead you off path.
Empower someone to drive the process
This is pivotal because without this appointed person, cloud optimization will not happen. Give someone the power to drive optimization policies throughout the organization. Companies most successful in achieving optimization have a good internal mandate to make it a priority. When messages come from the top, and are enforced through a project champion, people tend to pay attention and management is much more effective.
Fill the data gaps
Cloud optimization is a data driven exercise, so you need all the data you can get to make it valuable. Your tools will be much more compelling when they have the data necessary to make smart recommendations. Understand where to get the data in your organization, and figure out how to get any data you don’t have. Verify your data regularly to confirm accuracy for intelligent decision making geared toward optimization.
Implement tagging practices
The practice of not only implementing, but also actively enforcing your tagging policies, drives optimization. Be it an environment tag, owner tag, or application tag, tags help you understand your data and what or who is driving spend.
While lack of tagging and data gaps prevent visibility, overprovisioning is also an accountability issue. Just look at the hundred-plus AWS services that show up on the bill of a long-time AWS customer. It’s not uncommon for 20-30% of the total to be attributed to services the organization never even knew existed at the time it migrated to the cloud.
Hold your app teams accountable with an internal mechanism that lets the data speak for itself. It can be as simple as a dashboard with tagging grading, because everybody understands those results.
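A tagging scorecard can start small. Here is a minimal sketch of per-team tag grading; the resource records and the required tag keys are illustrative assumptions, not a 2nd Watch tool:

```python
# Minimal tag-compliance "grading" sketch. The resource records and the
# required tag keys are illustrative; in practice they would come from
# your cloud provider's inventory or billing export.
REQUIRED_TAGS = {"environment", "owner", "application"}

def grade(resources):
    """Return per-team compliance as a fraction of fully tagged resources."""
    scores = {}
    for res in resources:
        team = res.get("team", "unassigned")
        ok = REQUIRED_TAGS.issubset(res.get("tags", {}))
        hits, total = scores.get(team, (0, 0))
        scores[team] = (hits + (1 if ok else 0), total + 1)
    return {team: hits / total for team, (hits, total) in scores.items()}

resources = [
    {"team": "web", "tags": {"environment": "prod", "owner": "alice",
                             "application": "storefront"}},
    {"team": "web", "tags": {"owner": "bob"}},       # missing required tags
    {"team": "data", "tags": {}},                    # untagged entirely
]
print(grade(resources))  # {'web': 0.5, 'data': 0.0}
```

Publishing a score like this per team is usually enough to start the accountability conversation, because everybody understands a failing grade.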
Rearchitect and refactor
Migrating to the cloud via a lift and shift can be a valuable strategy for certain organizations. However, after a few months in the cloud, you need to intentionally move forward with the next steps. Reevaluating, refactoring and rearchitecting will occur multiple times along the way. Without them, you end up spending more money than necessary.
Continuous optimization is a must
Optimization is not a one and done project because the possibilities are constantly evolving. Almost every day, a new technology is introduced. Maybe it’s a new instance family or tool. A couple years ago it was containers, and before that it was serverless. Being aware of these new and improved technologies is key to maintaining continuous optimization.
Engage with an experienced partner
There are a lot of factors to consider, evaluate, and complete as part of your cloud optimization practice. To maximize your optimization efforts, you want someone experienced to guide your strategy.
One benefit to partnering with an optimization expert, like 2nd Watch, is that an external partner can diffuse the internal conflicts typically associated with optimization. So much of the process is navigating internal politics and red tape. A partner helps meld the multiple layers of your business with a holistic approach that ensures your cloud is running as efficiently as possible.
-Willy Sennott, Optimization Practice Manager
Optimizing your cloud is essential for maximizing budgets, centralizing business units, making informed decisions, and driving performance. Regardless of whether you’re already in the cloud or you’re just beginning to consider migrating, you need to be aware of the challenges to optimization in order to avoid or overcome them and reach your optimization goals.
The most pervasive challenge of optimization in the cloud is the complexity of the task. Regardless of the cloud platform – AWS, Azure, Google Cloud, or a hybrid cloud strategy – the intricacies are constantly evolving and changing. Trying to stay on top of that as an individual business requires a good amount of time, resources, and effort. Adding new tools and processes to your cloud requires integration, stakeholder agreement, data mining, analysis, and maintenance. While the potential outcomes from optimization are business-changing, it’s an ongoing process with many moving parts.
Standardized governance frameworks bring decentralized business units and disparate stakeholders together to accomplish business-wide objectives. Shared responsibility, from central IT to individual app teams, prevents the costly consequences of overprovisioning. While many organizations are knowingly overprovisioned, they can’t seem to solve the problem. Part of the issue is simply a lack of overall governance.
Cloud optimization is a data-driven exercise. If it’s not data driven, it’s not scalable. You need to maximize your data by knowing what data you have, where it is, and how to access it. Also important is knowing what data is missing. Many organizations believe they have complete metrics, yet they aren’t capturing and monitoring memory, which is a huge piece of the puzzle. In fact, memory is one of the most commonly missing data points across organizations.
Incredibly important within data discovery and data mapping is gaining visibility through tagging. Without an enforced and uniform tagging strategy as part of your governance structure, spend can increase without anyone accounting for it. Tags provide insight into your cloud economics, letting you know who is spending, what they’re spending it on, and how much they’re spending. It’s not uncommon to see larger organizations with a number of individual linked accounts that no one can trace to an owner. We’ve even found, after some digging, that the owners of those accounts haven’t been with the company for months! To get the cost-saving benefits of cloud optimization, you need visibility throughout the process.
5. Technical expertise
You need a certain level of technical expertise and intuition to take advantage of all the ways you can optimize your cloud. Too often, techs aren’t necessarily thinking about optimization, but rather make decisions based on other performance or technical aspects. Without optimization at the forefront of these deterministic behaviors, the business drivers may not perform as expected. Partner with data scientists and architects to map connections between data, workloads, resources, financial mechanisms, and your cloud optimization goals.
Tools are part of the solution, but not the entire solution.
While tools can help with your cloud optimization process, they can’t solve these common challenges alone. Tools simply don’t have the capability to close your data gaps. In fact, one foundational issue with tools is the specific algorithms used to generate recommendations: a tool will produce recommendations whether or not its data is complete, and recommendations built on incomplete data create confusion and introduce new risks.
It takes work to get the best results. Someone has to first be able to deduce the information provided by your tools, then put it into context for the various decision makers and stakeholders, and finally, your application owners and businesses teams have to architect the optimization correctly to be able to take advantage of the savings.
In choosing the right tools to aid your optimization, be aware of ‘tool champions’ who create internal noise around decision making. New tools are launched almost daily, and different stakeholders are going to champion different tools.
Once you find a tool, stick with it. Give it a chance to fully integrate with your cloud, provide training, and encourage adoption for best results. The longer it’s a part of your infrastructure, the more it will be able to aid in optimization.
2nd Watch takes a holistic approach to cloud optimization from strategy and planning, to cost optimization, forecasting, modeling and analytics. Download our eBook to learn more about adopting a holistic approach to cloud cost optimization.
-Willy Sennott, Optimization Practice Manager
Hybrid cloud strategies require a fair amount of effort and knowledge to construct, spanning infrastructure, orchestration, applications, data migration, IT management, and potential issues related to silos. There are a number of complexities to consider to enable seamless integration of a well-constructed hybrid cloud strategy. We recommend employing these 5 best practices as you move toward a multi-cloud or hybrid cloud architecture to ensure a successful transition.
Utilize cloud management tools.
Cloud management providers have responded to the complexities of a hybrid strategy with an explosion of cloud management tools. These tools can look at your automation and governance, lifecycle management, usability, access and more, and perform many tasks with more visibility.
Unique tooling for each cloud provider is especially important. Some partners may recommend a single pane of glass for simplicity, but that can be too simplistic for service catalogs and for launching new resources. The risk of going too simplistic is missing the opportunity to take advantage of the best aspects of each cloud.
Complete a full assessment of applications and dependencies first.
Before you jump into a hybrid cloud strategy, you need to start with a full assessment of your applications and dependencies. A common misstep is moving applications to the public cloud, while keeping your database in your private cloud or on-prem datacenter. The result is net latency drag, leading to problems like slow page loads and videos that won’t play.
Mapping applications and dependencies to the right cloud resource prior to migration gives you the insight necessary for a complete migration with uninterrupted performance. Based on the mapping, you know what to migrate when, with full visibility into what will be impacted by each. This initial step will also help with cloud implementation and hybrid connect down the line.
Put things in the right place.
This might sound obvious, but it can be challenging to rationalize where to put all your data in a hybrid environment. Start by using the analysis of your applications and dependencies discussed above. The mapping provides insight into traffic flows, networking information, and the different types of data you’re dealing with.
A multi-cloud environment is even more complex with cost implications and networking components. On-prem skills related to wide area network (WAN) connectivity are still necessary as you consider how to monitor the traffic – ingress, egress, east, and west.
Break down silos.
Silos can be found in all shapes and sizes in an organization, but one major area for silos is your data. Data is one of the biggest obstacles to moving to the cloud because of the cost of moving it in and out and accessing it. The amount of data you have significantly impacts your migration strategy, so it’s critical to have a clear understanding of where data may be siloed.
Every department has their own data, and all of it must be accounted for prior to migrating. Some data silo issues can be resolved with data lakes and data platforms, but once you realize silos exist, there’s an opportunity to break them down throughout the organization.
An effective method for breaking down silos is getting buy-in from organizational leaders to break the cultural patterns creating silos in the first place. Create a Cloud Center of Excellence (CCoE) during your cloud transformation to understand and address challenges within the context of the hybrid strategy across the organization.
Partner with proven experts.
Many companies have been successful in their hybrid cloud implementation by leveraging a partner for some of the migration, while their own experts manage their internal resources. With a partner by your side, you don’t have to invest in the initial training of your staff all at once. Instead, your teams can integrate those new capabilities and skills as they start to work with the cloud services, which typically increases retention, reduces training time, and increases productivity.
Partners will also have the knowledge necessary to make sure you not only plan but implement and manage the hybrid architecture for overall efficiency. When choosing a partner, make sure they’ve proven the value they can bring. For instance, 2nd Watch is one of only five VMware Cloud on AWS Master Services Competency holders in the United States. That means we have the verified experience to understand the complexities of running a hybrid VMware Cloud implementation.
If you’re interested in learning more about the hybrid cloud consulting and management solutions provided by 2nd Watch, Contact Us to take the next step in your cloud journey.
-Dusty Simoni, Sr Product Manager, Hybrid Cloud
Surprisingly, AWS has very quietly released a major enhancement/overhaul to purchasing compute resources up front. To date, purchasing Reserved Instances (Standard or Convertible) has offered AWS users great savings for their static workloads. This works because static workloads tend to utilize a set number of resources and RIs are paid for in advance, thereby justifying the financial commitment. That said, how often do today’s business needs remain constant, particularly with today’s product development? So, until now, you had two choices if you couldn’t use your RIs: take the loss and let the RI term run out or undertake the hassle of selling it on the marketplace (potentially for a loss). AWS Savings Plans, on the other hand, provide a gigantic leap forward in solving this problem. In fact, you will find that these AWS Savings Plans provide far more flexibility and return for your investment than the standard RI model.
Here is the gist of the AWS Savings Plans program, taken from the AWS site:
AWS Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instances usage, regardless of instance family, size, OS, tenancy or AWS Region, and also applies to AWS Fargate usage.
AWS Savings Plans offer significant savings over On Demand, just like EC2 Reserved Instances, in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. You can sign up for Savings Plans for a 1- or 3-year term and easily manage your plans by taking advantage of recommendations, performance reporting and budget alerts in the AWS Cost Explorer. (Jeff Barr, AWS, 11.06.2019)
This is HUGE for AWS clients, because now, for the first time ever, savings can also be applied to workloads that leverage serverless containers—as well as traditional EC2 instances!
Currently there are two AWS Savings Plans, and here’s how they compare:
| EC2 Instance Savings Plan | Compute Savings Plan |
| --- | --- |
| Offers discounts up to 72% off on-demand rates (the same as Standard RIs). | Offers discounts up to 66% off on-demand rates (the same as Convertible RIs). |
| Any changes in instances are restricted to the same AWS Region. | Spans Regions, which could be a huge draw for companies that need regional or national coverage. |
| Restricts EC2 instances to the same family, but allows changes in instance size and OS (e.g., t3.medium to t3.2xlarge). | More flexible: does not limit EC2 instance family or OS, so you are no longer locked into a specific instance family at the moment of purchase, as you would be with a traditional RI. |
| EC2 instances only. Similar to Convertible RIs, this plan allows you to increase instance size, with a new twist: you can also reduce instance size. This means you may no longer have to sell unused RIs on the marketplace! | Allows clients to mix and match AWS products, such as EC2 and Fargate; extremely beneficial for clients who use a range of environments for their workloads. |
| BOTTOM LINE: Slightly less flexible, but you garner a greater discount. | BOTTOM LINE: More flexible, but with less of a discount. |
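To see how the commitment math plays out, here is a back-of-the-envelope sketch. The hourly rate and discount level are illustrative assumptions, not published AWS prices:

```python
# Back-of-the-envelope comparison of on-demand spend vs. a Savings Plan
# commitment. All rates and the discount level are illustrative, not
# actual AWS pricing.
HOURS_PER_YEAR = 8760

def annual_cost(hourly_rate, hours=HOURS_PER_YEAR):
    return hourly_rate * hours

on_demand_rate = 0.40        # $/hour across covered workloads (assumed)
sp_discount = 0.30           # assumed mid-range Savings Plan discount

committed_rate = on_demand_rate * (1 - sp_discount)
savings = annual_cost(on_demand_rate) - annual_cost(committed_rate)
print(f"annual savings: ${savings:,.2f}")  # annual savings: $1,051.20
```

Because the commitment is measured in $/hour of compute spend rather than in specific instances, the same arithmetic holds even as workloads shift between families, Regions, or Fargate.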
As with standard RI purchases, understanding your workloads will be key to determining when to use AWS Savings Plans vs. standard RIs (RIs aren’t going anywhere, but we recommend that Savings Plans be used in place of RIs moving forward) vs. On-Demand (including analysis of potential savings from auto-parking, seasonality, elasticity, and so on). Sound a bit overwhelming? Fear not! This is where 2nd Watch’s Cloud Optimization service excels! Enrollment starts with a full analysis of your organization’s usage, AWS environment, and any other requirements/restrictions your organization may have. The final result is a detailed report, expertly determined by our AWS-certified optimization engineers, with our savings findings and recommendations—customized just for you!
Because AWS Savings Plans cannot, unfortunately, replace existing RI purchases, they bring the most immediate value to clients who are either new to AWS or have no current RI commitments on their account. Whatever your goals, our optimization experts are ready to help you plan the most strategically efficient and cost-effective “next step” of your cloud transformation.
And that’s just the beginning
If you think that AWS Savings Plans may benefit your new or existing AWS deployment, contact us to jumpstart an analysis.
-Jeff Collins, Cloud Optimization Product Management
In my last blog post, I covered the basics of cloud cost optimization using the Six Pillars model, and focused on the ‘hows’ of optimization and the ‘whys’ of its importance. In this blog, I’d like to talk about what comes next: preparing your organization for your optimization project. The main reason most clients delay or avoid confronting cloud optimization is that it’s incredibly complex. Challenges from cloud sprawl to misaligned corporate priorities can bring a project to a screeching halt. Understanding the challenges before you begin is essential to getting off on the right foot. Here are the 5 main challenges we’ve seen when implementing a cloud cost optimization project:
- Cloud sprawl refers to the unrestricted, unregulated creation and use of cloud resources; cloud cost sprawl, therefore, refers to the costs incurred by each and every cloud resource (e.g., storage, instances, data transfer). This typically presents as decentralized account or subscription management.
- Billing complexity, in this case, specifically refers to the ever-changing and variable billing practices of cloud providers and the invoices they provide you. Considering all possible variable configurations when creating many solutions across an organization, Amazon Web Services (AWS) alone has 500,000 plus SKUs you could see on any single invoice. If you cannot make sense of your bill up front, your cost optimization efforts will languish.
- Lack of access to data and application metrics is one of the biggest barriers to entry. Cost optimization is a data-driven exercise. Without billing data and application metrics over time, many incorrect assumptions end up being made, resulting in higher costs.
- Misaligned policies and methods can be the obstacle that will make or break your optimization project. When every team, organization or department has their own method for managing cloud resources and spend, the solution becomes more organizational change and less technology implementation. This can be difficult to get a handle on, especially if the teams aren’t on the same page with needing to optimize.
- A lack of incentives may seem surprising (after all, who doesn’t want to save money?), but it is the number one blocker we have seen in large enterprises toward achieving optimization end goals. Central IT is laser-focused on cost management, while application and business units are focused more on speed and innovation. Both goals are important, but without the right incentives, process, and communication, this fails every time. Building executive support to reapply realized optimization savings directly back to the business units, increasing their application and innovation budgets, is the only way to bridge misaligned priorities and build the foundation for lasting optimization motivation.
According to many cloud software vendors, waste accounts for 30% to 40% of all cloud usage. In the RightScale State of the Cloud Report 2019, a survey revealed that 35% of cloud spend is wasted. 2nd Watch has found that within large enterprise companies, there can be up to 70% savings through a combination of software and services. It often starts by just implementing a solid cost optimization methodology.
When working on a project for cloud cost optimization, it’s essential to first get the key stakeholders of an organization to agree to the benefits of optimizing your cloud spend. Once the executive team is on board and an owner is assigned, the path to optimization is clear, covering each of the 6 Pillars of Optimization.
THE PATH TO OPTIMIZATION
STEP ONE – Scope It Out!
As with any project, you first want to identify the goals and scope and then uncover the current state environment. Here are a few questions to ask to scope out your work:
- Overall Project Goal – Are you focused on cost savings, workload optimization, uptime, performance or a combination of these factors?
- Budget – Do you want to sync to a fiscal budget? What is the cycle? What budget do you have for upfront payments? Do you budget at an account level or organization level?
- Current State – What number of instances and accounts do you have? What types of agreements do you have with your cloud provider(s)?
- Growth – Do you grow seasonally, or do you have planned growth based on projects? Do you anticipate existing workloads to grow or shrink over time?
- Measurement – How do you currently view your cloud bill? Do you have detailed billing enabled? Do you have performance metrics over time for your applications?
- Support – Do you have owners for each application? Are people available to assess each app? Are you able to shut down apps during off hours? Do you have resources to modernize applications?
STEP TWO – Get Your Org Excited
One of the big barriers to a true optimization is gaining access to data. In order to gather the data (step 3) you first need to get the team onboard to grant you or the optimization project team access to the information.
During this step, get your cross-functional team excited about the project, share the goals and current state info you gathered in the previous step and present your strategy to all your stakeholders.
Stakeholders may include application owners, cloud account owners, IT Ops, IT security and/or developers who will have to make changes to applications.
Remember, data is key here, so find the people who own the data. Those who are monitoring applications or own the accounts are the typical stakeholders to involve. Then share with them the goals and bring them along this journey.
STEP THREE – Gather Your Data
Data is grouped into a few buckets:
- Billing Data – Get a clear view of your cloud bill over time.
- Metrics Data – CPU, I/O, Bandwidth and Memory for each application over time is essential.
- Application Data – Conduct interviews of application owners to understand the nuances. Graph out risk tolerance, growth potential, budget constraints and identify the current tagging strategy.
A month’s worth of data is good, though three months of data is much better to understand the capacity variances for applications and how to project into the future.
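As a toy illustration of why a longer metric history matters, this sketch flags instances whose 95th-percentile CPU stays low across the sampled period. The threshold and sample data are assumptions for illustration:

```python
# Sketch of using metric history to flag right-sizing candidates:
# instances whose peak (95th-percentile) CPU stays well under capacity.
# The 20% threshold and the sample data are illustrative assumptions.
def percentile(samples, pct):
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(round(pct / 100 * (len(ordered) - 1))))
    return ordered[idx]

def rightsizing_candidates(cpu_history, threshold=20.0):
    """cpu_history: {instance_id: [hourly CPU %, ...]} over weeks or months."""
    return [iid for iid, samples in cpu_history.items()
            if percentile(samples, 95) < threshold]

history = {
    "web-1": [5, 8, 12, 9, 7, 11, 6, 10],      # consistently idle
    "db-1": [40, 85, 60, 90, 70, 55, 88, 75],  # genuinely busy
}
print(rightsizing_candidates(history))  # ['web-1']
```

With only a few days of samples, a seasonal spike can hide entirely; three months of history makes the percentile far more trustworthy.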
STEP FOUR – Visualize and Assess Your Usage
This step takes a bit of skill. There are tools like CloudHealth that can help you understand your cost and usage in the cloud. Then there are other tools that can help you understand your application performance over time. Correlating the data from each of these sources across the pillars of optimization is essential to understanding where you can find the optimal cost savings.
I often recommend bringing in an optimization expert for this step. Someone with a data science, cloud and accounting background can help you visualize data and find the best options for optimization.
STEP FIVE – Plan Your Remediation Efforts and Get to Work!
Now that you know where you can save, take that information and build out a remediation plan. This should include addressing workloads in one or more of the pillars.
For example, you may shut down resources at night for an application and move it to another family of instances/VMs based on current pricing.
Your remediation should include changes by application as well as:
- RI Purchase Strategy across the business on a 1 or 3-year plan.
- Auto-Parking Implementation to park your resources when they’re not in use.
- Right-Sizing based on CPU, memory, I/O.
- Family Refresh or movement to the newer, more cost-effective instance families or VM-series.
- Elimination of Waste like unutilized instances, unattached volumes, idle load balancers, etc.
- Storage reassessment based on size, data transfer, retrieval time and number of retrieval requests.
- Tagging Strategy to track each instance/VM and trace it back to the right owners.
- IT Chargeback process and systems to manage the process.
Remediation can take anywhere from one month to a year’s time based on organization size and the support of application teams to make necessary changes.
Download our ‘5 Steps to Cloud Cost Optimization’ infographic for a summary of this process.
With as much as 70% savings possible after implementing one of these projects, you can see the compelling reason to start. Many of the benefits are organizational and long-lasting, including:
- Visibility to make the right cloud spending decisions
- Break-down of your cloud costs by business area for chargeback or showback
- Control of cloud costs while maintaining or increasing application performance
- Improved organizational standards to keep optimizing costs over time
- Identification of short- and long-term cost savings across the optimization pillars
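A chargeback or showback report can start as simply as grouping billing line items by a business-area tag. Here is a minimal sketch; the line-item fields and tag name are illustrative assumptions:

```python
# Sketch of a showback breakdown: summing line-item costs by a
# business-area tag. The line items mimic rows from a billing export;
# the field names are illustrative.
from collections import defaultdict

def showback(line_items):
    totals = defaultdict(float)
    for item in line_items:
        area = item.get("tags", {}).get("business-area", "untagged")
        totals[area] += item["cost"]
    return dict(totals)

line_items = [
    {"cost": 120.0, "tags": {"business-area": "marketing"}},
    {"cost": 75.5, "tags": {"business-area": "analytics"}},
    {"cost": 30.0, "tags": {}},  # untagged spend surfaces immediately
]
print(showback(line_items))
# {'marketing': 120.0, 'analytics': 75.5, 'untagged': 30.0}
```

Note how the "untagged" bucket falls out for free: any spend no one claims becomes visible the moment you group by tag.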
Many companies reallocate the savings to innovative projects to help their company grow. The outcome of a well-managed cloud cost optimization project can propel your organization into a focus on cloud-native architecture and application refactoring.
Though complex, cloud cost optimization is an achievable goal. By cross-referencing the 6 pillars of optimization with your organization’s policies, applications, and teams, you can quickly find savings of 30-40% and grow from there.
By addressing project risks like lack of awareness, decentralized account management, lack of access to data and metrics, and lack of clear goals, your team can quickly achieve savings.
Ready to get started with your cloud cost optimization? Schedule a Cloud Cost Optimization Discovery Session for a free 2-hour session with our team of experts.
-Stefana Muller, Sr Product Manager
Let me start by painting the picture: You’re the CFO. Or the manager of a department, group, or team, and you’re ultimately responsible for any and all financial costs incurred by your team/group/department. Or maybe you’re in IT and you’ve been told to keep a handle on the costs generated by application use and code development resources. Your company has moved some or all of your projects and apps to the public cloud, and since things seem to be running pretty smoothly from a production standpoint, most of the company is feeling pretty good about the transition.
The promise that moving to the cloud would cut costs hasn’t materialized, and attempting to figure out the monthly bill from your cloud provider has you shaking your head.
Source: Amazon Web Services (AWS). “Understanding Consolidated Bills – AWS Billing and Cost Management”. (2017). Retrieved from https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/con-bill-blended-rates.html
From Reserved Instances and on-demand costs, to the “unblended” and “blended” rates, attempting to even make sense of the bill has you no closer to understanding where you can optimize your spend.
It’s not just the pricing structure that requires an entire department of accountants to make sense of; the breakdown of the services themselves is just as mind-boggling. In fact, there are at least 500,000 SKUs and price combinations in AWS alone! In addition, your team likely has no limitation on who can spin up any specific resource at any time, intrinsically compounding the problem, especially when staff leave resources running and the proverbial meter keeps racking up charges in the background.
Addressing this complex and ever-moving problem is not, in fact, a simple matter, and it requires a comprehensive and intimate approach that starts with understanding the variety of opportunities available for cost and performance optimization. This is where 2nd Watch and our Six Pillars of Cloud Optimization come in.
The Six Pillars of Cloud Cost Optimization
- Reserved Instances (RIs)
AWS Reserved Instances, Azure Reserved VM Instances, and Google Cloud Committed Use Discounts take the ephemeral out of cloud resources, allowing you to estimate up front what you’re going to use. This also entitles you to steep discounts for pre-planning, which ends up as a great financial incentive.
Most cloud cost optimizations, erroneously, begin and end here, providing you and your organization with a less than optimal solution. Resources to estimate RI purchases are available through cloud providers directly and through 3rd-party optimization tools. For example, CloudHealth by VMware provides a clear picture of where to purchase RIs based on your cloud use over a number of months and will help you manage your RI lifecycle over time.
Two of the major factors to consider with cloud cost optimization are Risk Tolerance and Centralized RI Management portfolios.
- Risk Tolerance refers to identifying how much you’re willing to spend up front in order to increase the possibility of future gains or recovered profits. For example, can your organization take a risk and cover 70% of your workloads with RIs? Or do you worry about consumption, and will therefore want to limit that to around 20-30%? Also, how long, in years, are you able to project ahead? One year is the least risky, sure, but three years, while also a larger financial commitment, comes with larger cost savings.
- Centralized RI Management portfolios allow for deeper RI coverage across organizational units, resulting in even greater savings opportunities. For instance, a single application team might have a limited pool of cash in which to purchase RIs. Alternatively, a centralized, whole organization approach would cover all departments and teams for all workloads, based on corporate goals. This approach, of course, also requires ongoing communication with the separate groups to understand current and future resources needed to create and execute a successful RI management program.
Once you identify your risk tolerance and centralize your approach to RIs, you can take advantage of this optimization option. That said, an RI-only optimization strategy is short-sighted: it only takes advantage of the pricing options your cloud vendor offers. It is important to overlay RI purchases with the 5 other optimization pillars to achieve the most effective cloud cost optimization.
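The risk-tolerance trade-off can be sketched as a blended hourly rate at different RI coverage levels. The discount and rates here are assumptions for illustration, not vendor pricing:

```python
# Illustration of the risk-tolerance trade-off: the blended hourly cost
# at different RI coverage levels. Discount and rates are assumed.
def blended_rate(on_demand, ri_discount, coverage):
    """coverage: fraction of usage covered by RIs (0.0 to 1.0)."""
    ri_rate = on_demand * (1 - ri_discount)
    return coverage * ri_rate + (1 - coverage) * on_demand

on_demand = 1.00   # $/hour, illustrative
discount = 0.40    # assumed 1-year RI discount

for coverage in (0.2, 0.7):
    rate = blended_rate(on_demand, discount, coverage)
    print(f"{coverage:.0%} covered -> ${rate:.2f}/hr")
# 20% covered -> $0.92/hr
# 70% covered -> $0.72/hr
```

Higher coverage lowers the blended rate, but the committed portion is paid whether or not consumption holds up, which is exactly the risk-tolerance question above.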
One of the benefits of the cloud is the ability to spin up (and down) resources as you need them. However, the downside of this instant technology is that there is very little incentive for individual team members to terminate these processes when they are finished with them. Auto-Parking refers to scheduling resources to shut down during off hours—an especially useful tool for development and test environments. Identifying your idle resources via a robust tagging strategy is the first step; this allows you to pinpoint resources that can be parked more efficiently. The second step involves automating the spin-up/spin-down process. Tools like ParkMyCloud, AWS Instance Scheduler, Azure Automation, and Google Cloud Scheduler can help you manage the entire auto-parking process.
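To see why auto-parking pays off, here is a minimal sketch of the schedule math, assuming a hypothetical policy where dev/test instances run only 12 hours a day on weekdays:

```python
# Sketch: estimate compute-hour savings from auto-parking a dev/test instance
# nights and weekends. The 12-hour weekday schedule is an example assumption.

def weekly_running_hours(weekday_hours_on: int, weekend_on: bool = False) -> int:
    """Hours per week an instance runs under a parking schedule."""
    weekdays = 5 * weekday_hours_on
    weekends = 2 * 24 if weekend_on else 0
    return weekdays + weekends

always_on = 7 * 24                    # 168 hours/week, never parked
parked = weekly_running_hours(12)     # 60 hours/week under the schedule
savings = 1 - parked / always_on

print(f"Running {parked}h instead of {always_on}h per week "
      f"cuts compute hours by {savings:.0%}")
```

Under these assumptions a parked instance bills roughly a third of the hours of an always-on one; tools like those named above automate the actual stop/start calls.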
Ah, right-sizing, the best way to ensure you’re using exactly what you need and not too little or too much. It seems like a no-brainer to just “enable right-sizing” immediately when you start using a cloud environment. However, without the ability to analyze resource consumption or enable chargebacks, right-sizing becomes a meaningless concept. Performance and capacity requirements for cloud applications often change over time, and this inevitably results in underused and idle resources.
Many cloud providers share best practices in right-sizing, though they spend more time explaining the right-sizing options that exist prior to a cloud migration. This is unfortunate, because right-sizing is an ongoing activity: to be truly effective it requires implementing policies and guardrails to reduce overprovisioning, tagging resources to enable department-level chargebacks, and properly monitoring CPU, memory, and I/O.
Right-sizing must also take into account auto-parked resources and RIs available. Do you see a trend here with the optimization pillars?
Family Refresh
Instance types, VM-series and “Instance Families” all describe methods by which cloud providers package up their instances according to the hardware used. Each instance/series/family offers different varieties of compute, memory, and storage parameters. Instance types within their set groupings are often retired as a unit when the hardware required to keep them running is replaced by newer technology. Cloud pricing changes directly in relationship to this changing of the guard, as newer systems replace the old. This is called Family Refresh.
Up-to-date knowledge of the instance types/families being used within your organization is a vital component to estimating when your costs will fluctuate. Truth be told, though, with over 500,000 SKU and price combinations for any single cloud provider, that task seems downright impossible.
Some tools exist, however, that can help monitor/estimate Family Refresh, though they often don’t take into account the overlap that occurs with RIs—or upon application of any of the other pillars of optimization. As a result, for many organizations, Family Refresh is the manual, laborious task it sounds like. Thankfully, we’ve found ways to automate the suggestions through our optimization service offering.
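The Family Refresh arithmetic can be sketched like this, with a hypothetical mapping of older instance types to newer equivalents and placeholder hourly prices standing in for a real rate card:

```python
# Sketch: estimate monthly savings from refreshing previous-generation
# instance types to newer equivalents. Mapping and prices are hypothetical.

refresh_map = {
    "m4.large": "m5.large",
    "c4.xlarge": "c5.xlarge",
}

hourly_price = {  # hypothetical $/hour, not actual provider rates
    "m4.large": 0.100, "m5.large": 0.096,
    "c4.xlarge": 0.199, "c5.xlarge": 0.170,
}

def refresh_savings(fleet: dict, hours: int = 730) -> float:
    """Monthly savings from refreshing `fleet` (type -> count) to newer families."""
    saved = 0.0
    for itype, count in fleet.items():
        new_type = refresh_map.get(itype)
        if new_type:
            saved += (hourly_price[itype] - hourly_price[new_type]) * count * hours
    return saved

fleet = {"m4.large": 10, "c4.xlarge": 4}
print(f"Estimated monthly savings: ${refresh_savings(fleet):,.2f}")
```

A real version of this table would need constant upkeep against the provider's catalog, which is exactly why the task is laborious without automation.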
Related to the issue of instances running long past their usefulness, waste is prevalent in the cloud. Waste may seem like an abstract concept when it comes to virtual resources, but each wasted unit is money spent for no purpose. And when there is no limit on the amount of resources you can use, there is also no incentive for the individuals using those resources to self-regulate their unused or under-utilized instances. Some examples of waste in the cloud include:
- AWS RDS instances or Azure SQL databases with no connections
- Under-utilized AWS EC2 instances
- Azure VMs that were spun up for training or testing and left running
- Dated snapshots holding storage space that will never be useful again
- Idle load balancers
- Unattached volumes
Identifying waste takes time and accurate reporting. That is another great reason to invest the time and energy in developing a proper tagging strategy: waste becomes instantly traceable to the organizational unit that incurred it, and therefore easily marked for review and/or removal. We’ve often seen companies buy RIs before they eliminate waste, which, without fail, causes them to overspend in the cloud for at least a year.
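Putting a dollar figure on findings like those above helps prioritize cleanup. The sketch below tallies a hypothetical month of waste findings using illustrative unit costs; real costs vary by provider, region, and resource size.

```python
# Sketch: put a monthly dollar figure on common cloud waste findings.
# Unit costs are illustrative assumptions, not actual provider pricing.

monthly_unit_cost = {  # hypothetical $/month per wasted item
    "unattached_volume": 8.00,
    "idle_load_balancer": 18.00,
    "stale_snapshot": 2.50,
    "idle_instance": 70.00,
}

def waste_bill(findings: dict) -> float:
    """Total monthly cost of waste, given item counts per waste category."""
    return sum(monthly_unit_cost[kind] * count for kind, count in findings.items())

findings = {
    "unattached_volume": 40,
    "idle_load_balancer": 6,
    "stale_snapshot": 300,
    "idle_instance": 12,
}

print(f"Monthly waste: ${waste_bill(findings):,.2f}")
```

Even at these modest per-item costs, a few hundred forgotten resources add up to thousands of dollars a month, which is why eliminating waste should come before committing to RIs.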
Storage in the cloud is a great way to reduce on-premises hardware spend. However, because it is so effortless to use, cloud storage can expand exponentially in a very short time, making it nearly impossible to predict accurate cloud spend. Cloud storage is usually charged along four characteristics:
- Size – How much storage do you need?
- Data Transfer (bandwidth) – How often does your data need to move from one location to another?
- Retrieval Time – How quickly do you need to access your data?
- Retrieval Requests – How often do you need to access your data?
There are a variety of options for different use cases, including file storage, databases, data backups, and data archives. Having a solid data lifecycle policy will help you estimate these numbers and ensure you are right-sizing your storage and using your storage quantity and bandwidth to their greatest potential at all times.
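The four characteristics above can be combined into a rough cost model. The rate card below is entirely hypothetical, but it shows why matching data to the right tier matters: rarely accessed data is far cheaper in an archive tier even after retrieval fees.

```python
# Sketch: estimate monthly storage cost from the four billing characteristics.
# The rate card is a hypothetical stand-in, not any provider's real pricing.

def monthly_storage_cost(size_gb: float,
                         transfer_gb: float,
                         retrieval_gb: float,
                         requests: int,
                         tier: str = "standard") -> float:
    """Rough monthly bill for one storage bucket on a hypothetical rate card."""
    rates = {
        #            $/GB stored, $/GB transfer, $/GB retrieval, $/1k requests
        "standard": (0.023,       0.09,          0.00,           0.005),
        "archive":  (0.004,       0.09,          0.02,           0.05),
    }
    store, xfer, retr, req = rates[tier]
    return (size_gb * store + transfer_gb * xfer
            + retrieval_gb * retr + requests / 1000 * req)

# Compare tiers for 5 TB of data that is read only occasionally.
hot = monthly_storage_cost(5000, 100, 100, 10_000, "standard")
cold = monthly_storage_cost(5000, 100, 100, 10_000, "archive")
print(f"standard: ${hot:.2f}/mo, archive: ${cold:.2f}/mo")
```

A data lifecycle policy is essentially this calculation applied continuously: as access frequency drops, data migrates to the cheaper tier automatically.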
So, you see, each of these six pillars of cloud cost optimization houses many moving parts, and with public cloud providers constantly modifying their service offerings and pricing, wrangling in your wayward cloud can seem unlikely. Plus, optimizing only one of the pillars without considering the others offers little to no improvement and can, in fact, unintentionally cost you more money over time. An effective optimization process must take all the pillars and the ways they overlap into account, institute the right policies and guardrails to ensure cloud sprawl doesn’t continue, and implement the right tools to allow your team to make informed decisions regularly.
The good news is that the future is bright! Once you have completely assessed your current environment, taken the pillars into account, made the changes required to optimize your cloud, and found a method by which to make this process continuous, you can investigate optimization through application refactoring, ephemeral instances, spot instances and serverless architecture.
The promised cost savings of public cloud is reachable, if only you know where to look.
2nd Watch offers a Cloud Cost Optimization service that can help guide you through this process. Our Cloud Cost Optimization service is guaranteed to reduce your cloud computing costs by 20%,* increasing efficiency and performance. Our proven methodology empowers you to make data driven decisions in context, not relying on tools alone. Cloud cost optimization doesn’t have to be time consuming and challenging. Start your cloud cost optimization plan with our proven method for success at https://offers.2ndwatch.com/download-cloud-cost-optimization-datasheet
*To qualify for guaranteed 20% savings, must have at least $50,000/month cloud usage.
-Stefana Muller, Sr. Product Manager