Streamlining AWS Cloud Spend for Innovation Investments

Cloud Spend 101: What is it, and why does it matter?

Cloud spend is the amount of money an organization spends in AWS and across all cloud platforms. A common belief is that moving to the cloud will significantly decrease your total cost of ownership (TCO) quickly, easily, and almost by default. Unfortunately, reaping the infrastructure cost savings of AWS is not that simple, but it certainly is attainable. To achieve a lower TCO while simultaneously boosting productivity and gaining operational resilience, business agility, and sustainability, you must strategize your migration and growth within AWS.

The most common mistake made when migrating from on-prem environments to the cloud is going “like-for-like”: creating a cookie-cutter copy of the on-prem footprint in the new cloud environment. Because on-prem and cloud are two completely different types of infrastructure, organizations that do this end up heavily over-provisioned and paying unnecessary, expensive On-Demand Instance rates.

Ideally, you want a well-developed game plan before migration starts to avoid losing money in the cloud. With the advice and support of a trusted cloud partner, a comprehensive strategy takes your organization from design to implementation to optimization. That puts you in the best position to stay on top of costs during every step of migration and once you’re established in AWS. Cost savings realized in the cloud can be reinvested in innovation that expands business value and grows your bottom line.

The 6 pillars of cloud spend optimization.

While it’s best to have a comprehensive strategy before migrating to the cloud, cloud spend optimization is an ongoing necessity in any cloud environment. With hundreds of thousands of different options for cloud services today, choosing the right tools is overwhelming and leaves much room for missteps. At the same time, there are also a lot of opportunities available. Regardless of where you are in your cloud journey, the six pillars of cloud spend optimization provide a framework for targeted interventions.

#1: Reserved Instances (RIs)

RIs deliver meaningful savings on Amazon EC2 costs compared to On-Demand Instance pricing. RIs aren’t physical instances but a billing discount applied to On-Demand Instance usage in your account. Pricing is based on the instance type, region, tenancy, and platform; term commitment; payment cadence; and offering class.
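
For teams scripting this analysis, here is a minimal sketch (Python with boto3) of how those pricing dimensions map onto the EC2 API for listing RI offerings. The region, instance type, platform, term, and payment option shown are illustrative choices, not recommendations:

```python
# Minimal sketch: list 1-year Standard RI offerings for an example instance
# type so upfront and hourly pricing can be compared across payment options.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

resp = ec2.describe_reserved_instances_offerings(
    InstanceType="m5.xlarge",          # example instance type
    ProductDescription="Linux/UNIX",   # platform
    InstanceTenancy="default",         # tenancy
    OfferingClass="standard",          # standard vs. convertible
    OfferingType="Partial Upfront",    # payment cadence
    MinDuration=31536000,              # 1-year term, in seconds
    MaxDuration=31536000,
    IncludeMarketplace=False,
)

for offer in resp["ReservedInstancesOfferings"]:
    hourly = next(
        (c["Amount"] for c in offer["RecurringCharges"] if c["Frequency"] == "Hourly"),
        0.0,
    )
    print(offer["ReservedInstancesOfferingId"], offer["FixedPrice"], hourly)
```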

#2: Auto-Parking

A significant benefit of the cloud is scalability, but the flip side is that individual team members control their own resources. Often, they forget or are not prompted or incentivized to terminate resources when they aren’t being used. Auto-Parking schedules and automates the spin-up/spin-down process based on hours of use to prevent paying for idle resources. This is an especially helpful tool for development and test environments.
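
As a rough illustration of the spin-down half of auto-parking, the following Python/boto3 sketch stops running EC2 instances carrying a hypothetical auto-park=true tag. You would run it on a schedule (for example, from a nightly cron job or scheduled Lambda) and pair it with a mirror job that starts the instances back up in the morning:

```python
# Minimal sketch of the "spin-down" half of auto-parking: stop any running
# EC2 instance tagged auto-park=true (an assumed tag convention).
import boto3

ec2 = boto3.client("ec2")

def park_tagged_instances():
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[
            {"Name": "tag:auto-park", "Values": ["true"]},        # assumed tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        inst["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for inst in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Parked:", park_tagged_instances())
```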

#3: Right-Sizing

Making sure you have exactly what you need and nothing you don’t requires an analysis of resource consumption, chargebacks, auto-parked resources, and available RIs. Using those insights, organizations can implement policies and guardrails to reduce overprovisioning by tagging resources for department-level chargebacks and properly monitoring CPU, memory, and I/O (input/output).
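
To make the monitoring piece concrete, here is a hedged Python/boto3 sketch that pulls two weeks of average CPU for a single instance from CloudWatch and flags it as a down-sizing candidate. The instance ID and the 10% threshold are examples only; memory and I/O would need their own metrics (memory requires the CloudWatch agent):

```python
# Minimal sketch: pull 14 days of average CPU for one instance and flag it
# as a right-sizing candidate if utilization stays low.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id: str, days: int = 14) -> float:
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=3600,                 # hourly datapoints
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

cpu = average_cpu("i-0123456789abcdef0")   # hypothetical instance ID
if cpu < 10.0:                             # illustrative threshold
    print(f"Average CPU {cpu:.1f}% - candidate for downsizing")
```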

#4: Family Refresh

Instance types, VM series, and Instance Families all describe the methods cloud providers use to package instances according to the underlying hardware. When instance types are retired and replaced with new technology, cloud pricing changes based on compute, memory, and storage parameters – this process is referred to as “Family Refresh.” Organizations must closely monitor instances and expected costs to manage these price fluctuations and prevent redundancies.

#5: Waste

Inherent in optimization is waste reduction. You need the checks and balances we’ve discussed to prevent unnecessary costs and reap the financial benefits of a cloud environment. Identifying waste and stopping the leaks takes time and regular, accurate reporting within each business unit. For example, when developers are testing, make sure they’re only spinning up new environments for a specific purpose. Once those environments are no longer in use, they should be decommissioned to avoid waste.

#6: Storage

Storage is a catalyst for many organizations’ move to the cloud because it’s a valuable way to reduce on-prem hardware spend. Again, to realize those savings, businesses must keep a watchful eye on what is being stored, why it’s being stored, and how much it will cost. There are typically four components impacting storage costs:

  1. Size – How much storage do you need?
  2. Data transfer (bandwidth) – How often does data move from one location to another?
  3. Retrieval time – How quickly do you need to access the data?
  4. Retrieval requests – How often do you need access to the data?

Depending on your answers to these questions, there are different ways to manage your environment using file storage, databases, data backup, and data archives. With a solid data lifecycle policy, organizations can estimate storage costs while right-sizing storage capacity and bandwidth.
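
As one illustration of what a data lifecycle policy can look like in practice, the sketch below applies an S3 lifecycle rule that tiers objects to cheaper storage classes and eventually expires them. The bucket name, prefix, and day counts are assumptions you would tune to your own retrieval-time and retrieval-frequency answers:

```python
# Minimal sketch of a data lifecycle policy expressed as an S3 lifecycle rule:
# move log objects to Infrequent Access after 30 days, to Glacier after 90,
# and delete them after a year. All names and day counts are illustrative.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-bucket",              # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```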

Private Pricing Agreements

Another way to control your AWS spend is with a PPA, or Private Pricing Agreement – formerly known as an EDP, or Enterprise Discount Program. A PPA is a business-led pricing agreement with AWS built around a specific term and commitment amount. Organizations that are already in the cloud and love the service can use their expected growth over the next three or five years to get a discount for committing to that spend and term with AWS. In addition to expected compute usage, Reserved Instances, and existing Savings Plans, the business can also include software purchases from AWS Marketplace in the agreement to earn further discounts.

Choosing a cloud optimization partner.

It’s easy to know what to do to control spend, but it’s a whole other beast to integrate cloud optimization into business initiatives and the culture of both IT teams and finance teams. Of course, you can go it alone if you have the internal cloud expertise required for optimization, but most businesses partner with an external cloud expert to avoid the expenses, risk, and time needed to see results. Attempting these strategies without an experienced partner can cost you more in the long run without achieving the ROI you expected.

In fact, when going it alone, businesses gain about 18% savings on average. While that may sound satisfying, companies that partner with the cloud experts at 2nd Watch average 40% savings on their compute expenses alone. How? We aim high, and so should you. Regardless of how you or your cloud optimization partner tackles cloud spend, target 90% or greater coverage in reserved instances and savings plans. In addition to the six pillars of optimization and PPAs, you or your partner also need to…

  • Know how to pick the right services and products for your business from the hundreds of thousands of options available.
  • Develop a comprehensive cloud strategy that goes beyond just optimizing cost.
  • Assess the overall infrastructure footprint to determine the effectiveness of serverless or containerization for higher efficiency.
  • Evaluate applications running on EC2 instances to identify opportunities for application modernization.

Take the next step in your cloud journey.

2nd Watch is a great choice for cloud spend optimization with AWS because we specialize in this area. With our extensive experience and in-depth knowledge of AWS services and pricing models, we can help you maximize your AWS investments. Our comprehensive solutions include cost analysis, budgeting, forecasting, and ongoing monitoring. We have a proven track record of delivering significant cost savings for our clients across various industries.

We leverage automation and advanced tools to identify cost-saving opportunities, eliminate waste, and optimize your AWS resources. This ensures efficiency and allows you to focus on innovation and growth. We provide continuous optimization and support, proactively identifying potential cost-saving measures and recommending adjustments based on your changing business needs.

With us, you’ll gain transparency into your AWS spend through detailed reports and analytics. This visibility empowers you to make informed decisions and manage your budgets effectively. Choose 2nd Watch for cloud spend optimization with AWS and experience the expertise, solutions, and track record that will help you achieve cost savings while driving innovation and growth.

2nd Watch saves organizations hundreds of thousands of dollars in the cloud every year, and we’d love to help you reallocate your cloud spend toward business innovation. Our experienced cloud experts work with your team to teach cloud optimization strategies that can be carried out independently in the future. As an AWS Premier Partner with 10 years of experience, 2nd Watch advisors know how to maximize your environment within budget so you can grow your business. Contact Us to learn more and get started!

 

 

 

 

 

 


A Short Guide to Understanding Looker Pricing and Capabilities

Navigating the current BI and analytics landscape is often an overwhelming exercise. With buzzwords galore and price points all over the map, finding the right tool for your organization is a common challenge for CIOs and decision-makers. Given the pressure to become a data-driven company, the way business users analyze and interact with their data has lasting effects throughout the organization.

Looker, a recent addition to the Gartner Magic Quadrant, has a pricing model that differs from the per-user or per-server approach. Looker does not advertise their pricing model; instead, they provide a “custom-tailored” model based on a number of factors, including total users, types of users (viewer vs. editor), database connections, and scale of deployment.

Those who have been through the first enterprise BI wave (with tools such as Business Objects and Cognos) will be familiar with this approach, but others who have become accustomed to the SaaS software pricing model of “per user per month” may see an estimate higher than expected – especially when comparing to Power BI at $10/user per month. In this article, we’ll walk you through the reasons why Looker’s pricing is competitive in the market and what it offers that other tools do not.

Semantic and Governance Model

Unlike some of its competitors, Looker is not solely a reporting and dashboarding tool – it also acts as a data catalog across the enterprise. Looker requires users to think about their data and how they want their data defined across the enterprise.

Before you can start developing dashboards and visualizations, your organization must first define a semantic model (an abstraction of the database layer into business-friendly terms) using Looker’s native LookML scripting, which will then translate the business definitions into SQL. Centralizing the definitions of business metrics and models guarantees a single source of truth across departments. This will avoid a scenario where the finance department defines a metric differently than the sales or marketing teams, all while using the same underlying data. A common business model also eliminates the need for users to understand the relationships of tables and columns in the database, allowing for true self-service capabilities.

While it requires more upfront work, you will save yourself the future headaches of debating why two different reports show different values or of re-defining the same business logic in every dashboard you create.

By putting data governance front and center, your data team can make it easy for business users to create insightful dashboards in a few simple clicks.

Customization and Extensibility

At some point in the lifecycle of your analytics environment, there’s a high likelihood you will need to make some tweaks. Looker, for example, allows you to view and modify the SQL that is generated behind each visualization. While this may sound like a simple feature, a common pain point across analytics teams is trying to validate and tie out aggregations between a dashboard and the underlying database. Access to the underlying SQL not only lets analysts quickly debug a problem but also allows developers to tweak the auto-generated SQL to improve performance and deliver a better experience.

Another common complaint from users is the speed at which IT can integrate new data into the data warehouse. In the “old world” of Cognos and Business Objects, if your calculations were not defined in the framework model or universe, you could not proceed without IT intervention. In the “new world” of Tableau, the dashboard and visualization are prioritized over the model. Looker brings the two approaches together with derived tables.

If your data warehouse doesn’t directly support a question you need to answer immediately, you can use Looker’s derived tables feature to create your own derived calculations. Derived tables allow you to create new tables that don’t already exist in your database. While relying on derived tables for long-term analysis isn’t recommended, they give Looker users immediate speed-to-insight while the data development team incorporates the logic into the enterprise data integration plan.

Collaboration

Looker takes collaboration to a new level as every analyst gets their own sandbox. While this might sound like a recipe for disaster with “too many cooks in the kitchen,” Looker’s centrally defined, version-controlled business logic lives in the software for everyone to use, ensuring consistency across departments. Dashboards can easily be shared with colleagues by simply sending a URL or exporting directly to Google Drive, Dropbox, and S3. You can also send reports as PDFs and even schedule email delivery of dashboards, visualizations, or their underlying raw data in a flat file.

Embedded Analytics

Looker enables collaboration outside of your internal team. Suppliers, partners, and customers can get value out of your data thanks to Looker’s modern approach to embedded analytics. Looker makes it easy to embed dashboards, visuals, and interactive analytics into any webpage or portal because it works with your own data warehouse. You don’t have to create a new pipeline or pay for the cost of storing duplicate data in order to take advantage of embedded analytics.

So, is Looker worth the price?

Looker puts data governance front and center, which in itself is a decision your organization needs to make (govern first vs. build first). A centralized way to govern and manage your models is often an additional cost in other tools, increasing the total investment when looking at competitors. If data governance and a single source of truth are critical features of your analytics deployment, then the ability to manage this and avoid the headaches of multiple versions of the truth makes Looker worth the cost.

If you’re interested in learning more or would like to see Looker in action, 2nd Watch has a full team of data consultants with experience and certifications in a number of BI platforms as well as a thorough understanding of how these tools can fit your unique needs. Get started with our data visualization starter pack.

 


Managed Cloud Services: Optimize, Reduce Costs, and Efficiently Achieve your Business Goals

Cloud adoption is becoming more popular across all industries, as the cloud has proven to be a reliable, efficient, and secure way to deliver software services. As cloud adoption increases, companies are faced with the issue of managing these new environments and their operations, which ultimately impacts day-to-day business operations. Not only are IT professionals faced with the challenge of juggling their everyday work with managing their company’s cloud platforms, but they must do so in a timely, cost-efficient manner. Often, this requires hiring and training additional IT people—resources that are getting more and more difficult to find.

This is where a managed cloud service provider, like 2nd Watch, comes in.

What is a Managed Cloud Service Provider?

Managing your cloud operations on your own can seem like a daunting, tedious task that distracts from strategic business goals. A cloud managed service provider (MSP) monitors and maintains your cloud environments, relieving IT of day-to-day cloud operations and ensuring your business operates efficiently. This is not to say IT professionals are incapable of performing these responsibilities, but rather that outsourcing allows the IT professionals within your company to concentrate on the strategic operations of the business. In other words, you do what you do best, and the service provider takes care of the rest.

The alternative to an MSP is hiring and developing within your company the expertise necessary to keep up with the rapidly evolving cloud environment and cloud-native technologies. Doing it yourself means factoring in a hiring process, training, and payroll costs.

While possible, maintaining your cloud environments internally might not be the most feasible option in the long run. Additionally, a private cloud environment can be costly and requires that your applications be managed internally. Migrating to the public cloud or adopting a hybrid cloud model gives companies flexibility, as both allow a service provider to take either partial or full control of the network infrastructure.

What are Managed Cloud Services?

Managed cloud services are the IT functions you hand over to your service provider, while you retain control of the functions you want to keep in-house. Some examples of the management that service providers offer include:

  • Managed cloud database: A managed database puts some of your company’s most valuable assets and information into the hands of a complete team of experienced Database Administrators (DBAs). DBAs are available 24/7/365 to perform tasks such as database health monitoring, database user management, capacity planning and management, etc.
  • Managed cloud security services: The public cloud has many benefits, but it also comes with security risks. Security management is another important MSP service to consider for your business. A cloud managed service provider can detect and prevent security threats before they cause damage, while fully optimizing the benefits provided by a cloud environment.
  • Managed cloud optimization: The cloud can be costly, but only as costly as you allow it to be. An MSP can optimize cloud spend through consulting, implementation, tools, reporting services, and remediation.
  • Managed governance & compliance: Without proper governance, your organization can be exposed to security vulnerabilities. Should a disaster occur within your business, such as a cyberattack on a data center, MSPs offer disaster recovery services to minimize recovery downtime and data loss. A managed governance and compliance service with 2nd Watch helps your Chief Security and Compliance Officers maintain visibility and control over your public cloud environment to help achieve on-going, continuous compliance.

At 2nd Watch, our foundational services include a fully managed cloud environment with 24/7/365 support and industry leading SLAs. Our foundational services address the key needs to better manage spend, utilization, and operations.

What are the Benefits of a Cloud Managed Service Provider?

Using a Cloud Managed Service Provider comes with many benefits if you choose the right one.

Some of these benefits include, but are not limited to: 

  • Cost savings: MSPs have experts that know how to efficiently utilize the cloud, so you get the most out of your resources while reducing cloud computing costs.
  • Increased data security: MSPs ensure proper safeguards are utilized while proactively monitoring and preventing potential threats to your security.
  • Increased employee productivity: With less time spent managing the cloud, your IT managers can focus on strategic business operations.
  • 24/7/365 management: Not only do MSPs take care of cloud management for you, but they do so around the clock.
  • Overall business improvement: When your cloud infrastructure is managed by a trusted cloud advisor, they can optimize your environments while simultaneously allowing time for you to focus on core business operations. They can also recommend cloud native solutions to further support the business agility required to compete.

Why Our Cloud Management Platform?

With cloud adoption increasing in popularity, choosing a managed cloud service provider to help with this process can be overwhelming. While there are many options, choosing one you can trust is important to the success of your business. 2nd Watch provides multi-cloud management across AWS, Azure, and GCP, and places a special emphasis on putting our customers before the cloud. Additionally, we use industry-standard, cloud-native tooling to prevent platform lock-in.

The solutions we create at 2nd Watch are tailored to your business needs, creating a large and lasting impact on our clients. For example:

  • On average, 2nd Watch saves customers 41% more than if they managed the cloud themselves (based on customer data)
  • Customers experience increased efficiency in launching applications, adding an average of 240 hours of productivity per year for your business
  • On average, we save customers 21% more than our competitors

Next Steps

2nd Watch helps customers at every step in their cloud journey, whether that’s cloud adoption or optimizing your current cloud environment to reduce costs. We can effectively manage your cloud, so you don’t have to. Contact us to get the most out of your cloud environment with a managed cloud service provider you can trust.

-Tessa Foley, Marketing


Top 10 Cloud Optimization Best Practices

Cloud optimization is a continuous process specific to a company’s goals, but there are some staple best practices all optimization projects should follow. Here are our top 10.

1. Begin with the end in mind

Business leaders and stakeholders throughout the organization should know exactly what they’re trying to achieve with a cloud optimization project. Additionally, this goal should be revisited on a regular basis to make sure you remain on track to achieve it. Create measures to gauge success at different points and follow the agreed-upon order of operations to complete the process.

2. Create structure around governance and responsibility

Overprovisioning is one of the most common issues adding unnecessary costs to your bottom line. Implement a specific, regulated structure around governance and responsibility for all teams involved in optimization to control unnecessary provisioning. Check in regularly to make sure teams are following the structure and that you only keep the tools you need and actively use.

3. Get all the Data you Need

Cloud optimization is a data-driven exercise. To be successful, you need insight into a range of data points. Not only do you need to identify what data you need and be able to get it, but you also need to know what data you’re missing and figure out how to get it. Collaborate with internal teams to make sure essential data isn’t siloed and to find out whether it’s already being collected elsewhere. Additionally, regularly clean and validate data to ensure it’s reliable for data-based decision making.

4. Implement Tagging Practices

To best utilize the data you have, organizing and maintaining it with strict tagging practices is necessary. Implement a system that works from more than just a technical standpoint. You can also use tagging to launch instances, control your auto-parking methodology, or drive scheduling. Tagging helps you understand the data and see what is driving spend. Whether it’s an environment tag, owner tag, or application tag, tagging provides clarity into spend, which is the key to optimization.
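
One lightweight way to enforce tagging is to audit it continuously. The sketch below (Python/boto3) lists EC2 instances missing any of a set of required tag keys; the environment/owner/application keys mirror the examples above and are an assumption, not a universal standard:

```python
# Minimal sketch of a tag audit: list EC2 instances missing any required tag
# key so their owners can be chased down. The required keys are examples.
import boto3

REQUIRED_TAGS = {"environment", "owner", "application"}

ec2 = boto3.client("ec2")

def untagged_instances():
    missing = {}
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tag_keys = {t["Key"].lower() for t in inst.get("Tags", [])}
                gaps = REQUIRED_TAGS - tag_keys
                if gaps:
                    missing[inst["InstanceId"]] = sorted(gaps)
    return missing

for instance_id, gaps in untagged_instances().items():
    print(instance_id, "missing tags:", ", ".join(gaps))
```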

5. Gain Visibility into Spend

Tagging is one way to see where your spend is going, but it’s not the only one you need. Manage accounts regularly to make sure inactive accounts aren’t continuing to be billed. Set up an internal mechanism to review spend with your app teams and hold them accountable. It can be as simple as a dashboard that grades tagging compliance, as long as it lets the data speak for itself.
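
If you want to let the data speak for itself programmatically, a sketch like the one below uses the Cost Explorer API to break a month of unblended cost down by an owner tag. The tag key and date range are examples, Cost Explorer must be enabled on the account, and the tag must be activated as a cost allocation tag:

```python
# Minimal sketch: break one month of unblended cost down by an "owner" tag
# using the Cost Explorer API. Tag key and dates are illustrative.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},   # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "owner"}],                 # assumed tag key
)

for group in resp["ResultsByTime"][0]["Groups"]:
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    unit = group["Metrics"]["UnblendedCost"]["Unit"]
    print(group["Keys"][0], amount, unit)
```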

6. Hire the Right Technical Expertise

Get more out of your optimization with the right technical expertise on your internal team. Savvy technicians should work alongside the business teams to drive the goals of optimization throughout the process. Without collaboration between these departments, you risk moving in different directions with multiple end goals in mind. For example, one team might be acting with performance or another technical aspect in mind without realizing the implications for optimization. Partnering with optimization experts can also keep teams aligned and moving toward the same goal.

7. Select the Right Tools and Stick with Them

Tools are a part of the optimization process, but they can’t solve problems alone. Additionally, there is an abundance of tools to choose from, many of which have similar functionality and outcomes. Find the right tools for your goals, facilitate adoption, and give them the time and data necessary to produce results. Don’t get distracted by every new, shiny tool available and the “tool champions” fighting for one over another. Avoid the costs of overprovisioning by checking usage regularly and maintaining the governance structure established across your teams.

8. Make sure your Tools are Working.

Never assume a tool or a process you’ve put in place is working. In fact, it’s better to assume it’s not working and consistently check its efficiency. This regular practice of confirming the tools you have are both useful and being used will help you avoid overprovisioning and unnecessary spending. For tools to be effective and serve their purpose, you need enough visibility to determine how the tool is contributing to your overall end goal.

9. Empower Someone to Drive the Process.

The number one call to action for anyone diving into optimization is to appoint a leader. Without someone specific, qualified, and active in managing the project with each stakeholder and team involved, you won’t accomplish your goals. Empower this leader internally to gain the respect and attention necessary for employees to understand the importance of continuous optimization and contribute on their part.

10. Partner with Experts.

Finding the right partner to help you optimize efficiently and effectively will make the process easier at every turn. Bringing in an external driver who has the know-how and experience to consult on strategy through implementation, management, and replication is a smart move with fast results.

2nd Watch takes a holistic approach to cloud optimization with a team of experienced data scientists and architects who help you maximize performance and returns on your cloud assets. Are you ready to start saving? Let us help you define your optimization strategy to meet your business needs and maximize your results. Contact Us to take the next step in your cloud journey.

-Willy Sennott, Optimization Practice Manager


Cloud Optimization: Top 5 Challenges and Why Tools Can’t Solve Them

Optimizing your cloud is essential for maximizing budgets, centralizing business units, making informed decisions, and driving performance. Regardless of whether you’re already in the cloud or you’re just beginning to consider migrating, you need to be aware of the challenges to optimization in order to avoid or overcome them and reach your optimization goals.

1. Complexity

The most pervasive challenge of optimization in the cloud is the complexity of the task. Regardless of the cloud platform – AWS, Azure, Google Cloud, or a hybrid cloud strategy – the intricacies are constantly evolving and changing. Trying to stay on top of that as an individual business requires a good amount of time, resources, and effort. Adding new tools and processes to your cloud requires integration, stakeholder agreement, data mining, analysis, and maintenance. While the potential outcomes from optimization are business-changing, it’s an ongoing process with many moving parts.

2. Governance

Standardized governance frameworks bring decentralized business units and disparate stakeholders together to accomplish business-wide objectives. Shared responsibility, from central IT to individual app teams, prevents the costly consequences of overprovisioning.  While many organizations are knowingly overprovisioned, they can’t seem to solve the problem. Part of the issue is simply a lack of overall governance.

3. Data

Cloud optimization is a data-driven exercise. If it’s not data-driven, it’s not scalable. You need to maximize your data by knowing what data you have, where it is, and how to access it. Also important is knowing what data is missing. Many organizations believe they have complete metrics, but they’re not capturing and monitoring memory, which is a huge piece of the puzzle. In fact, memory is one of the most commonly missing data points across organizations.
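
A quick way to size your memory gap is to check which instances are actually publishing memory metrics at all. EC2 does not emit memory by default; the CloudWatch agent does (on Linux, typically as mem_used_percent in the CWAgent namespace, which is the assumption in this sketch):

```python
# Minimal sketch: list the instances that are reporting memory via the
# CloudWatch agent. Anything in your fleet that doesn't appear here is a
# gap in your optimization data. Namespace/metric name assume the agent's
# default Linux configuration.
import boto3

cloudwatch = boto3.client("cloudwatch")

reporting = set()
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="CWAgent", MetricName="mem_used_percent"):
    for metric in page["Metrics"]:
        for dim in metric["Dimensions"]:
            if dim["Name"] == "InstanceId":
                reporting.add(dim["Value"])

print(f"{len(reporting)} instances reporting memory:", sorted(reporting))
```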

4. Visibility

Incredibly important within data discovery and data mapping is gaining visibility through tagging. Without an enforced, uniform tagging strategy as part of your governance structure, spend can increase without anyone accounting for it. Tags provide insight into your cloud economics, letting you know who is spending, what they are spending it on, and how much they are spending. It’s not uncommon to see larger organizations with a number of individual linked accounts that no one can attribute to an owner. We’ve even found, after some digging, that the owners of those accounts haven’t been with the company for months! To get the cost-saving benefits of cloud optimization, you need visibility throughout the process.

5. Technical expertise

You need a certain level of technical expertise and intuition to take advantage of all the ways you can optimize your cloud. Too often, technical teams aren’t thinking about optimization, but rather make decisions based on other performance or technical considerations. Without optimization in mind when those decisions are made, the business outcomes may not be what you expect. Partner with data scientists and architects to map connections between data, workloads, resources, financial mechanisms, and your cloud optimization goals.

Tools are part of the solution, but not the entire solution.

While tools can help with your cloud optimization process, they can’t solve these common challenges alone. Tools simply don’t have the capability to solve your data gaps. In fact, one foundational issue with tools is the specific algorithms used to generate recommendations: a tool will make the same recommendations whether or not it has complete data, creating confusion and introducing new risks.

It takes work to get the best results. Someone has to first interpret the information provided by your tools, then put it into context for the various decision makers and stakeholders, and finally, your application owners and business teams have to architect the optimization correctly to be able to take advantage of the savings.

In choosing the right tools to aid your optimization, be aware of ‘tool champions’ who create internal noise around decision making. New tools are launched almost daily, and different stakeholders are going to champion different tools.

Once you find a tool, stick with it. Give it a chance to fully integrate with your cloud, provide training, and encourage adoption for best results. The longer it’s a part of your infrastructure, the more it will be able to aid in optimization.

2nd Watch takes a holistic approach to cloud optimization from strategy and planning, to cost optimization, forecasting, modeling and analytics. Download our eBook to learn more about adopting a holistic approach to cloud cost optimization.

-Willy Sennott, Optimization Practice Manager


5 Steps to Cloud Cost Optimization: Hurdles to Optimization are Organizational, Not Technical


In my last blog post, I covered the basics of cloud cost optimization using the Six Pillars model and focused on the ‘hows’ of optimization and the ‘whys’ of its importance. In this blog, I’d like to talk about what comes next: preparing your organization for your optimization project. The main reason most clients delay or avoid confronting cloud optimization is that it’s incredibly complex. Challenges from cloud sprawl to misaligned corporate priorities can cause a project to come to a screeching halt. Understanding the challenges before you begin is essential to getting off on the right foot.

 

5 Main Cloud Cost Optimization Challenges

Here are the 5 main challenges we’ve seen when implementing a cloud cost optimization project:

  • Cloud sprawl refers to the unrestricted, unregulated creation and use of cloud resources; cloud cost sprawl, therefore, refers to the costs incurred related to the use of each and every cloud resource (i.e., storage, instances, data transfer, etc.). This typically presents as decentralized account or subscription management.
  • Billing complexity, in this case, specifically refers to the ever-changing and variable billing practices of cloud providers and the invoices they provide you. Considering all the possible configurations across an organization’s solutions, Amazon Web Services (AWS) alone has more than 500,000 SKUs that could appear on any single invoice. If you cannot make sense of your bill up front, your cost optimization efforts will languish.
  • Lack of access to data and application metrics is one of the biggest barriers to entry. Cost optimization is a data-driven exercise. Without billing data and application metrics over time, many incorrect assumptions end up being made, resulting in higher costs.
  • Misaligned policies and methods can be the obstacle that makes or breaks your optimization project. When every team, organization, or department has its own method for managing cloud resources and spend, the solution becomes more about organizational change and less about technology implementation. This can be difficult to get a handle on, especially if the teams aren’t on the same page about the need to optimize.
  • A lack of incentives may seem surprising (after all, who doesn’t want to save money?), but it is the number one blocker we have seen in large enterprises toward achieving optimization end goals. Central IT is laser focused on cost management, while application and business units are focused more on speed and innovation. Both goals are important, but without the right incentives, process, and communication, this fails every time. Building executive support to reapply realized optimization savings directly back to the business units, increasing their application and innovation budgets, is the only way to bridge misaligned priorities and build the foundation for lasting optimization motivation.

According to many cloud software vendors, waste accounts for 30% to 40% of all cloud usage. In the RightScale State of the Cloud Report 2019, a survey revealed that 35% of cloud spend is wasted. 2nd Watch has found that within large enterprise companies, there can be up to 70% savings through a combination of software and services.  It often starts by just implementing a solid cost optimization methodology.

When working on a cloud cost optimization project, it’s essential to first get the key stakeholders of the organization to agree on the benefits of optimizing your cloud spend. Once the executive team is on board and an owner is assigned, the path to optimization is clear, covering each of the 6 Pillars of Optimization.

Path to Cloud Optimization

Step One: Scope and Objectives

As with any project, you first want to identify the goals and scope and then uncover the current state environment. Here are a few questions to ask to scope out your work:

  • Overall Project Goal – Are you focused on cost savings, workload optimization, uptime, performance or a combination of these factors?
  • Budget – Do you want to sync to a fiscal budget? What is the cycle? What budget do you have for upfront payments? Do you budget at an account level or organization level?
  • Current State – What number of instances and accounts do you have? What types of agreements do you have with your cloud provider(s)?
  • Growth – Do you grow seasonally, or do you have planned growth based on projects? Do you anticipate existing workloads to grow or shrink over time?
  • Measurement – How do you currently view your cloud bill? Do you have detailed billing enabled? Do you have performance metrics over time for your applications?
  • Support – Do you have owners for each application? Are people available to assess each app? Are you able to shut down apps during off hours? Do you have resources to modernize applications?

Step Two: Data Access

One of the big barriers to true optimization is gaining access to data. In order to gather the data (step 3), you first need to get the team on board to grant you or the optimization project team access to the information.

During this step, get your cross-functional team excited about the project, share the goals and current state info you gathered in the previous step and present your strategy to all your stakeholders.

Stakeholders may include application owners, cloud account owners, IT Ops, IT security and/or developers who will have to make changes to applications.

Remember, data is key here, so find the people who own the data. Those who are monitoring applications or own the accounts are the typical stakeholders to involve. Then share with them the goals and bring them along this journey.

Step Three: Data Management

Data is grouped into a few buckets:

  1. Billing Data – Get a clear view of your cloud bill over time.
  2. Metrics Data – CPU, I/O, Bandwidth and Memory for each application over time is essential.
  3. Application Data – Conduct interviews of application owners to understand the nuances. Map out risk tolerance, growth potential, and budget constraints, and identify the current tagging strategy.

A month’s worth of data is good, though three months of data is much better to understand the capacity variances for applications and how to project into the future.

Step Four: Visualize and Assess Data Usage

This step takes a bit of skill. There are tools like CloudHealth that can help you understand your cost and usage in the cloud. Then there are other tools that can help you understand your application performance over time. Using the data from each of these sources and correlating it across the pillars of optimization is essential to understanding where you can find the optimal cost savings.

I often recommend bringing in an optimization expert for this step. Someone with a data science, cloud and accounting background can help you visualize data and find the best options for optimization.

Step Five: Remediation Plan

Now that you know where you can save, take that information and build out a remediation plan. This should include addressing workloads in one or more of the pillars.

For example, you may shut down resources at night for an application and move it to another family of instances/VMs based on current pricing.

Your remediation should include changes by application as well as:

  1. RI Purchase Strategy across the business on a 1 or 3-year plan.
  2. Auto-Parking Implementation to park your resources when they’re not in use.
  3. Right-Sizing based on CPU, memory, I/O.
  4. Family Refresh or movement to the newer, more cost-effective instance families or VM-series.
  5. Elimination of Waste like unutilized instances, unattached volumes, idle load balancers, etc.
  6. Storage reassessment based on size, data transfer, retrieval time and number of retrieval requests.
  7. Tagging Strategy to tag each instance/VM and trace it back to the right resources.
  8. IT Chargeback process and systems to manage the process.

Remediation can take anywhere from one month to a year’s time based on organization size and the support of application teams to make necessary changes.

Download our ‘5 Steps to Cloud Cost Optimization’ infographic for a summary of this process.

End Result

With as much as 70% savings possible after implementing one of these projects, you can see the compelling reason to start. A big part of the benefits is organizational and long-lasting, including:

  • Visibility to make the right cloud spending decisions
  • Break-down of your cloud costs by business area for chargeback or showback
  • Control of cloud costs while maintaining or increasing application performance
  • Improved organizational standards to keep optimizing costs over time
  • Identification of short- and long-term cost savings across the various optimization pillars

Many companies reallocate the savings to innovative projects to help their company grow. The outcome of a well-managed cloud cost optimization project can propel your organization into a focus on cloud-native architecture and application refactoring.

Though complex, cloud cost optimization is an achievable goal. By cross-referencing the 6 pillars of optimization with your organization’s policies, applications, and teams, you can quickly find savings of 30-40% and grow from there.

By addressing project risks like lack of awareness, decentralized account management, lack of access to data and metrics, and lack of clear goals, your team can quickly achieve savings.

Ready to get started with your cloud cost optimization? Schedule a Cloud Cost Optimization Discovery Session for a free 2-hour session with our team of experts.

-Stefana Muller, Sr Product Manager



The 6 Pillars of Cloud Cost Optimization

Let me start by painting the picture: You’re the CFO. Or the manager of a department, group, or team, and you’re ultimately responsible for any and all financial costs incurred by your team/group/department. Or maybe you’re in IT and you’ve been told to keep a handle on the costs generated by application use and code development resources. Your company has moved some or all of your projects and apps to the public cloud, and since things seem to be running pretty smoothly from a production standpoint, most of the company is feeling pretty good about the transition.

Except you.

The promise that moving to the cloud would cut costs hasn’t materialized, and attempting to figure out the monthly bill from your cloud provider has you shaking your head.

Source: Amazon Web Services (AWS). “Understanding Consolidated Bills – AWS Billing and Cost Management”. (2017). Retrieved from https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/con-bill-blended-rates.html

From Reserved Instances and on-demand costs, to the “unblended” and “blended” rates, attempting to even make sense of the bill has you no closer to understanding where you can optimize your spend.

It’s not even just the pricing structure that requires an entire department of accountants to make sense of; the breakdown of the services themselves is just as mind-boggling. In fact, there are at least 500,000 SKUs and price combinations in AWS alone! In addition, your team likely has no limitation on who can spin up any specific resource at any time, intrinsically compounding the problem—especially when staff leave resources running, the proverbial meter racking up the $$ in the background.

Addressing this complex and ever-moving problem is not, in fact, a simple matter, and requires a comprehensive and intimate approach that starts with understanding the variety of opportunities available for cost and performance optimization. This is where 2nd Watch and our Six Pillars of Cloud Optimization come in.

The Six Pillars of Cloud Cost Optimization

1. Reserved Instances (RIs)

AWS Reserved Instances, Azure Reserved VM Instances, and Google Cloud Committed Use Discounts take the ephemeral out of cloud resources, allowing you to estimate up front what you’re going to use. This also entitles you to steep discounts for pre-planning, which ends up as a great financial incentive.

Most cloud cost optimizations, erroneously, begin and end here—providing you and your organization with a less than optimal solution. Resources to estimate RI purchases are available through cloud providers directly and through 3rd party optimization tools. For example, CloudHealth by VMware provides a clear picture into where to purchase RIs based on your current cloud use over a number of months and will help you manage your RI lifecycle over time.

Two of the major factors to consider with cloud cost optimization are Risk Tolerance and Centralized RI Management portfolios.

  • Risk Tolerance refers to identifying how much you’re willing to spend up front in order to increase the possibility of future gains or recovered profits. For example, can your organization take a risk and cover 70% of your workloads with RIs? Or do you worry about consumption, and will therefore want to limit that to around 20-30%? Also, how long, in years, are you able to project ahead? One year is the least risky, sure, but three years, while also a larger financial commitment, comes with larger cost savings.
  • Centralized RI Management portfolios allow for deeper RI coverage across organizational units, resulting in even greater savings opportunities. For instance, a single application team might have a limited pool of cash in which to purchase RIs. Alternatively, a centralized, whole organization approach would cover all departments and teams for all workloads, based on corporate goals. This approach, of course, also requires ongoing communication with the separate groups to understand current and future resources needed to create and execute a successful RI management program.

Once you identify your risk tolerance and centralize your approach to RIs, you can take advantage of this optimization option. An RI-only optimization strategy is short-sighted, though: it only allows you to take advantage of the pricing options your cloud vendor offers. It is important to overlay RI purchases with the 5 other optimization pillars to achieve the most effective cloud cost optimization.
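
To see how risk tolerance translates into dollars, here is a small worked example in Python; the rates and instance counts are made-up illustrations rather than AWS list prices, but the arithmetic is the part that matters:

```python
# Worked example (illustrative numbers only, not AWS list prices): how RI
# coverage translates into projected annual savings for a steady-state fleet.
HOURS_PER_YEAR = 8760

on_demand_rate = 0.192       # $/hr, hypothetical On-Demand rate
ri_effective_rate = 0.121    # $/hr, hypothetical 1-year RI effective rate
instance_count = 100         # steady-state instances

def annual_cost(coverage: float) -> float:
    """Blend RI and On-Demand cost for a given RI coverage fraction."""
    covered = instance_count * coverage
    uncovered = instance_count - covered
    return HOURS_PER_YEAR * (covered * ri_effective_rate + uncovered * on_demand_rate)

baseline = annual_cost(0.0)
for coverage in (0.3, 0.7, 0.9):          # conservative vs. aggressive risk tolerance
    cost = annual_cost(coverage)
    print(f"{coverage:.0%} coverage: ${cost:,.0f}/yr "
          f"({(baseline - cost) / baseline:.1%} savings vs. all On-Demand)")
```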

2. Auto-Parking

One of the benefits of the cloud is the ability to spin up (and down) resources as you need them. However, the downside of this instant technology is that there is very little incentive for individual team members to terminate these processes when they are finished with them. Auto-Parking refers to scheduling resources to shut down during off hours—an especially useful tool for development and test environments. Identifying your idle resources via a robust tagging strategy is the first step; this allows you to pinpoint resources that can be parked more efficiently. The second step involves automating the spin-up/spin-down process. Tools like ParkMyCloud, AWS Instance Scheduler, Azure Automation, and Google Cloud Scheduler can help you manage the entire auto-parking process.

3. Right-Sizing

Ah, right-sizing, the best way to ensure you’re using exactly what you need and not too little or too much. It seems like a no-brainer to just “enable right-sizing” immediately when you start using a cloud environment. However, without the ability to analyze resource consumption or enable chargebacks, right-sizing becomes a meaningless concept. Performance and capacity requirements for cloud applications often change over time, and this inevitably results in underused and idle resources.

Many cloud providers share best practices in right-sizing, though they spend more time explaining the right-sizing options that exist prior to a cloud migration. This is unfortunate as right-sizing is an ongoing activity that requires implementing policies and guardrails to reduce overprovisioning, tagging resources to enable department level chargebacks, and properly monitoring CPU, Memory and I/O, in order to be truly effective.

Right-sizing must also take into account auto-parked resources and RIs available. Do you see a trend here with the optimization pillars?

4. Family Refresh

Instance types, VM-series and “Instance Families” all describe methods by which cloud providers package up their instances according to the hardware used. Each instance/series/family offers different varieties of compute, memory, and storage parameters. Instance types within their set groupings are often retired as a unit when the hardware required to keep them running is replaced by newer technology. Cloud pricing changes directly in relationship to this changing of the guard, as newer systems replace the old. This is called Family Refresh.

Up-to-date knowledge of the instance types/families being used within your organization is a vital component to estimating when your costs will fluctuate. Truth be told, though, with over 500,000 SKU and price combinations for any single cloud provider, that task seems downright impossible.

Some tools exist, however, that can help monitor/estimate Family Refresh, though they often don’t take into account the overlap that occurs with RIs—or upon application of any of the other pillars of optimization. As a result, for many organizations, Family Refresh is the manual, laborious task it sounds like. Thankfully, we’ve found ways to automate the suggestions through our optimization service offering.
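
As a small illustration of the hardware side of a family refresh, the sketch below compares successive generations of the same family with the EC2 API. The instance types are examples, and the pricing deltas would come separately from the AWS Price List API:

```python
# Minimal sketch: compare successive generations of the same instance family
# to see what a "family refresh" changes on the hardware side. Types below
# are examples; pricing differences are not shown here.
import boto3

ec2 = boto3.client("ec2")

resp = ec2.describe_instance_types(
    InstanceTypes=["m4.xlarge", "m5.xlarge", "m6i.xlarge"]   # example generations
)

for it in sorted(resp["InstanceTypes"], key=lambda t: t["InstanceType"]):
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    network = it["NetworkInfo"]["NetworkPerformance"]
    print(f'{it["InstanceType"]}: {vcpus} vCPU, {mem_gib:.0f} GiB, network {network}')
```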

5. Waste

Related to the issue of instances running long past their usefulness, waste is prevalent in the cloud. Waste may seem like an abstract concept when it comes to virtual resources, but each wasted unit in this case = $$ spent for no purpose. And, when there is no limit to the amount of resources you can use, there is also no incentive for individuals using the resources to self-regulate their unused/under-utilized instances. Some examples of waste in the cloud include:

  • AWS RDSs or Azure SQL DBs without a connection
  • Unutilized AWS EC2s
  • Azure VMs that were spun up for training or testing
  • Dated snapshots that are holding storage space that will never be useful
  • Idle load balancers
  • Unattached volumes

Identifying waste takes time and accurate reporting. It is a great reason to invest the time and energy in developing a proper tagging strategy, however, since waste will be instantly traceable to the organizational unit that incurred it, and therefore, easily marked for review and/or removal. We’ve often seen companies buy RIs before they eliminate waste, which, without fail, causes them to overspend in cloud – for at least a year.
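
For a concrete starting point, the sketch below surfaces two of the waste items listed above: unattached EBS volumes and aging snapshots. The 180-day snapshot threshold is an arbitrary example, and nothing here deletes anything automatically:

```python
# Minimal sketch: surface unattached EBS volumes and old snapshots as waste
# candidates. The 180-day age threshold is an example; review before deleting.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=180)

unattached = [
    v["VolumeId"]
    for page in ec2.get_paginator("describe_volumes").paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )
    for v in page["Volumes"]
]

old_snapshots = [
    s["SnapshotId"]
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"])
    for s in page["Snapshots"]
    if s["StartTime"] < cutoff
]

print("Unattached volumes:", unattached)
print(f"Snapshots older than 180 days: {len(old_snapshots)}")
```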

6. Storage

Storage in the cloud is a great way to reduce on-premises hardware spend. That said, though, because it is so effortless to use, cloud storage can, in a very short matter of time, expand exponentially, making it nearly impossible to predict accurate cloud spend. Cloud storage is usually charged by four characteristics:

  • Size – How much storage do you need?
  • Data Transfer (bandwidth) – How often does your data need to move from one location to another?
  • Retrieval Time – How quickly do you need to access your data?
  • Retrieval Requests – How often do you need to access your data?

There are a variety of options for different use cases including using more file storage, databases, data backup and/or data archives. Having a solid data lifecycle policy will help you estimate these numbers, and ensure you are both right-sizing and using your storage quantity and bandwidth to its greatest potential at all times.

So, you see, each of these six pillars of cloud cost optimization houses many moving parts, and with public cloud providers constantly modifying their service offerings and pricing, wrangling your wayward cloud can seem unlikely. Plus, optimizing only one of the pillars without considering the others offers little to no improvement, and can, in fact, unintentionally cost you more money over time. An efficacious optimization process must take all pillars and the way they overlap into account, institute the right policies and guardrails to ensure cloud sprawl doesn’t continue, and implement the right tools to allow your team to regularly make informed decisions.

The good news is that the future is bright! Once you have completely assessed your current environment, taken the pillars into account, made the changes required to optimize your cloud, and found a method by which to make this process continuous, you can investigate optimization through application refactoring, ephemeral instances, spot instances and serverless architecture.

The promised cost savings of public cloud is reachable, if only you know where to look.

2nd Watch offers a Cloud Cost Optimization service that can help guide you through this process. Our Cloud Cost Optimization service is guaranteed to reduce your cloud computing costs by 20%,* increasing efficiency and performance. Our proven methodology empowers you to make data driven decisions in context, not relying on tools alone. Cloud cost optimization doesn’t have to be time consuming and challenging. Start your cloud cost optimization plan with our proven method for success at https://offers.2ndwatch.com/download-cloud-cost-optimization-datasheet

*To qualify for guaranteed 20% savings, must have at least $50,000/month cloud usage.

Stefana Muller, Sr. Product Manager


How to Choose the Right Hyperscale Managed Service Provider (MSP)

One of the challenges that many businesses struggle to overcome is how to keep up with the massive (and ongoing) changes in technology and implement best practices for managing them. The Public Cloud—in particular, Hyperscale Cloud providers like AWS—has ushered in a new era of IT technology. This technology changes rapidly and is designed to provide businesses with the building blocks that allow IT organizations to focus on innovation and growth, rather than mess with things that don’t differentiate their business.

A Hyperscale Managed Services Provider (MSP) can help address a very important gap for many businesses that struggle to:

  • Keep up with the frenetic pace of change in Public Cloud
  • Define and use best practices to achieve superior results
  • Manage their infrastructure the most efficient way possible

 

In most cases, Hyperscale MSPs have deep expertise, technology, and automated capabilities to deliver high-quality managed services on a hyperscale platform. And because Hyperscale MSPs are solely focused on delivering capabilities on the cloud IaaS and PaaS that today’s enterprises are using, they are well versed in the best practices and standards to achieve the right results for their clients.

So, how do you go about selecting the right MSP?  The answer to this question is critical because we believe choosing the right MSP is one of the most important decisions you will make when consuming the public cloud.  It is also important to note that some of the qualifications to look for when selecting a Hyperscale MSP for your business needs are obvious, while others are more elusive.  I’ve included a few suggestions below to keep in mind when evaluating and selecting the right Hyperscale MSP.

Expertise on the Platform of Your Choice

First and foremost, no two public cloud providers are the same.  Each provider implements MSP strategies differently—from infrastructure and redundancy, to automation and billing concepts.  Secondly, it isn’t enough for a provider to tell you they have a few applications running on the platform.  When looking to entrust someone with your most valuable assets, expertise is key!  An important KPI for measuring the capabilities of a MSP that many businesses overlook is the provider’s depth and breadth of experience. A qualified Hyperscale MSP will have the right certifications, accreditations, and certified engineer-to-customer ratios.  You may feel good about signing with a large provider because they claim a higher number of certified engineers than the smaller firms, until…you realize their certified engineer-to-customer ratio is out of whack.  Having 200 certified engineers means nothing if you have 5,000+ customers.  At 2nd Watch, we have more certified engineers than we do customers, and we like it that way.

The Focus is on Customer Value

This is an obvious recommendation, but it has some nuances. Many MSPs simply take the “Your mess for less” approach to managing your infrastructure. Our customers tell us that one of the reasons they chose 2nd Watch was our focus on the things that matter to them. Many MSPs have the technical capabilities to manage cloud infrastructure, but not all of them are able to focus on how an enterprise wants to use the Public Cloud. MSPs that understand their client’s needs and goals tailor their approach to work for the enterprise, rather than forcing the enterprise to snap to a preconceived notion of how things should work. Find an MSP that is willing to make the Public Cloud work the way you want it to, and both your overall experience and the outcome will be game-changing.

Optimize, Optimize, Optimize

Moving to the Public Cloud is just the first step in the journey to realizing business value and transforming IT. The Cloud is dynamic by nature, so it is important not to rest on a migration alone once you are up and running. Adopting new instance types, taking advantage of new services, or simply optimizing what you run today are all great ways to keep your infrastructure running at its best. Make sure your MSP has a strong, ongoing story about optimization and how they will provide it. At 2nd Watch, we break optimization into three categories: Financial Optimization, Technical Optimization, and Operations Optimization. It is a good idea to ask your MSP how they handle these three facets of optimization and at what cadence. Keep in mind that some providers’ pricing structures can act as a disincentive to optimize. For example, if your MSP’s fee is a percentage of your total cloud spend and they reduce that spend by 30% through optimization, they now get paid proportionately less, so they are unlikely to pursue that kind of optimization regularly because it hurts their revenue. Alternatively, we have also seen MSPs charge extra for these services, so ask whether optimization is included and get details on exactly what would be considered an extra charge.
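To make the incentive problem concrete, here is a minimal sketch in Python using purely hypothetical numbers (the $100,000 monthly bill and 10% fee rate are assumptions for illustration, not typical pricing). Under a percentage-of-spend model, every dollar the MSP optimizes away comes straight out of its own fee.

# Illustrative only: hypothetical numbers showing why a percentage-of-spend
# fee model can discourage optimization work.

monthly_cloud_spend = 100_000        # assumed monthly AWS bill, in dollars
msp_fee_rate = 0.10                  # assumed MSP fee: 10% of total cloud spend
optimization_savings = 0.30          # the 30% reduction mentioned above

fee_before = monthly_cloud_spend * msp_fee_rate
fee_after = monthly_cloud_spend * (1 - optimization_savings) * msp_fee_rate

print(f"MSP fee before optimization: ${fee_before:,.0f}/month")
print(f"MSP fee after optimization:  ${fee_after:,.0f}/month")
print(f"Revenue the MSP gives up:    ${fee_before - fee_after:,.0f}/month")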

Full Service

The final qualification to look for in a Hyperscale MSP is whether they are a full-service provider. Too often, pure-play MSPs cannot offer a full-service engagement under one umbrella. The most common reason is that they lack the professional services to assess and migrate workloads or the cloud architects to build out new functionality.

Our enterprise clients tell us that one of their major frustrations is having to work with multiple vendors on a project. With multiple vendors, it is difficult to keep track of who is accountable for what. Why would the vendor performing the migration be motivated to make sure the application is optimized for support if they aren’t the ones providing the support? I have heard horror stories of businesses trying to move to the cloud and becoming frustrated when multiple vendors work on the same workload, because the vendors blame each other for missed deadlines or undelivered milestones and technical content. Your business will be better served by hiring an MSP who can run the full cloud-migration process, from workload assessment and migration to managing and optimizing your cloud infrastructure on an ongoing basis.

In addition to the tips I have listed above, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help evaluate the various public cloud managed service providers available to you. Gartner positioned 2nd Watch in the Leaders quadrant of the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide for our completeness of vision and ability to execute.  You can download and read the full report here.


-Kris Bliesner, CTO


Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.


Cloud Cost Complexity: Bringing the unknown unknowns to light

When we first speak to mid-size and large enterprises considering the Amazon Web Services (AWS) cloud, the same themes come up consistently. Sometimes they are stated explicitly and sometimes only implied, but one thing nearly all of these organizations are apprehensive about is their discomfort with “unknown unknowns” (the stuff you don’t even know that you don’t know). They recognize that AWS represents a paradigm shift in how IT services are provisioned, operated, and paid for, but they don’t know where that shift might trip them up or where it will create gaps in their existing processes. This is a great reason to work with an AWS Premier Partner, but that is a story for another day.

Let’s talk about one of the truly unknown unknowns – AWS Cost Accounting.  The pricing for Amazon Web Services is incredibly transparent.  The price for each service is clearly labeled online and publicly available.  Amazon’s list prices are the same for all customers, and the only discounts come in the form of volume discounts based on usage, or Reserved Instances (RIs).  So if all of this is so transparent, how can this be an unknown unknown?  The devil is in the details.

The scenario nearly always plays out the same way. An enterprise starts by dipping a toe into the AWS waters. It starts with one account, then two or three. Six months later, there are 10 or 20 AWS accounts. This is a good thing: AWS is designed to be easy to consume, and nothing more than a credit card is required to get started. The challenge comes when your organization moves to consolidated invoice billing. You may do this because you want central procurement to manage payments, because you want to pool your volume for discounts, or simply because you want it off your credit card. Whatever the reason, you now have an AWS bill that might not be what you expected (the unknown unknown).

If you have ever seen an AWS bill, you know it contains a phenomenal amount of useful information. Amazon provides a monthly spreadsheet with every line item billed for the period, in amazing detail and precision. The downside of this wealth of information is that once you accumulate several AWS accounts on the same consolidated bill, the bill becomes exponentially more difficult to rationalize, and tracking your costs gets much harder.
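As an illustration of how to get a handle on that consolidated spreadsheet, here is a minimal Python sketch using pandas. The file name is hypothetical, and the column names (LinkedAccountId, ProductName, UnBlendedCost) follow the classic detailed billing report layout; adjust them to match your own export.

# A minimal sketch, assuming the monthly detailed billing report has been
# downloaded as a CSV from the payer account.
import pandas as pd

bill = pd.read_csv("aws-detailed-billing-report.csv", low_memory=False)  # hypothetical file name

# Roll the line items up by linked account and service so each account's
# share of the consolidated bill is visible at a glance.
summary = (
    bill.groupby(["LinkedAccountId", "ProductName"])["UnBlendedCost"]
        .sum()
        .sort_values(ascending=False)
)
print(summary.head(20))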

In contrast to that unknown unknown, the ability to accurately attribute per-workload costs is one of AWS’ best features and a strong draw to the platform. For many organizations, the ability to provide showback or chargeback bills to business units is extraordinarily valuable. Once a business unit can see the direct costs of its IT resources, it can make more informed business decisions. It is amazing how often HA and DR requirements get adjusted when a business unit can calculate the cost/benefit of each option.

Along with the apprehension around unknown unknowns, many organizations are both excited and a little scared of moving to a truly variable cost model. They are used to knowing what their costs are (even if they are over-provisioned). The idea that they won’t know what a workload will cost until it is up and running on AWS can be a scary one. This fear can be flipped into a virtue: try it! Run a quick POC and test the workload for performance, cost, and so on. See if it works for your use case. If it does, great; if not, it didn’t cost much to find out.

Managing your costs in AWS means more than just deciphering this month’s bill. It also means being able to track historical spend by service and interpret the results. Business units need to understand why their portion of the bill is going up or down and what is driving the change.
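One way to track historical spend by service programmatically is the AWS Cost Explorer API. The sketch below uses boto3’s get_cost_and_usage call to pull monthly unblended cost grouped by service; it assumes credentials are configured for the payer account, and the date range is a placeholder.

# A sketch of tracking historical spend by service with the AWS Cost Explorer API.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-07-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print month-by-month spend per service so a business unit can see what is
# driving its portion of the bill up or down.
for period in response["ResultsByTime"]:
    print(period["TimePeriod"]["Start"])
    for group in period["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0:
            print(f"  {service}: ${amount:,.2f}")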

The solution to the cost accounting challenge is to use a cost accounting tool built specifically for AWS. As Amazon is quick to point out, the pricing model, while transparent, is also fluid; AWS has dropped prices on various services more than 50 times in the last few years. To effectively manage AWS costs, users want a comprehensive solution that can take a consolidated bill and turn it into real insights. Few on-premises or co-located environments can match the cost granularity and accuracy you get from AWS paired with a properly implemented cost accounting tool. With the right tool, you can take one of the unknown unknowns and turn it into a powerful advantage on your journey to the public cloud!

2nd Watch offers software and services that simplify your cloud billing as part of our Managed Billing solution. This solution expands on our industry-leading cloud accounting platform with a trained concierge to help answer billing questions, making it easier to analyze, budget, track, forecast, and invoice your public cloud costs. Our Managed Billing Service lets you accurately allocate deployment expenses to your financial reporting structure and provides business insights through detailed usage analytics and budget reporting. We offer these services for free to our Managed Services customers. Find out more at www.2ndwatch.com/Managed-Cloud.

-By Marc Kagan, Managed Cloud Specialist


Introducing the Scheduled Reserved Instance

Amazon Web Services will continue to lower its prices for IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) for a number of reasons. But that doesn’t mean your public cloud costs will go down over time. Over the past two years, I’ve seen SMBs and enterprise firms surprised by rising cloud costs despite falling rates. How does this happen? And how can your business get ahead of the problem?

How AWS can lower its rates over and over again

First is the concept of capacity planning, which is much different in the public cloud than in the traditional days of voice and data infrastructure. In the old days, we used the 40-60-80 rule. Due to the lengthy lead times required to order circuits, rack equipment, run cables, and go live, enterprise IT organizations used 40-60-80 as the triggers for when to act on new capacity-building activities.

At 40% utilization, the business would begin planning for capacity expansion. At 60% utilization, new capacity would be ordered. At 80% utilization, the new capacity would be turned up and ready for go-live. All the while, IT planners would run from business unit to business unit trying to gather usage forecasts and growth plans for the next 12-24 months. It was a never-ending cycle. Wow – that was exhausting!

Second is the well-known concept of Economies of Scale, which affords AWS cost advantages due to the sheer size, scale, and output of its operations globally. Simply put, more customers lead to more usage, and Amazon’s fixed costs are spread across a larger customer base. As a result, the cost per unit (EC2 usage hour, Mbps of data transfer, or gigabyte of S3 storage) decreases. A lower cost per unit allows Amazon to safely lower its prices and lead the market in public cloud adoption.

In the public cloud world, Amazon can gauge customer commitment, capacity planning, and growth estimates by offering reservations for its infrastructure – otherwise known as Reserved Instances. Historically, Reserved Instances have come in six varieties: three payment options – No Upfront, Partial Upfront, and Full Upfront (referring to the initial down payment amount) – offered against a 1-year or 3-year commitment.

No Upfront RIs have the lowest discount factor over the commitment term, and Full Upfront RIs have the highest. With the help of Reserved Instances, AWS has been able to plan its capacity in each region by offering customers a discount in exchange for their extended commitment. Genius!

New Reserved Instances

But it gets better. In January, AWS released a new type of Reserved Instance that gives the customer more control over timing and gives Amazon more insight into what time of day the AWS resource will be used. Why is this new “Scheduled Reserved Instance” important?

Well, a traditional RI is most effective when the instance runs all day and all year. There is a breakeven point for each RI type, but for simplicity let’s assume that the resource should be always-on to achieve the maximum savings.
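To see why always-on usage maximizes RI value, here is a simplified breakeven sketch in Python. The rates and upfront payment are hypothetical, not actual AWS list prices; the point is the shape of the math, not the specific numbers.

# A simplified breakeven sketch with hypothetical prices: at what utilization
# does a 1-year RI become cheaper than running the same instance On-Demand?

on_demand_hourly = 0.10      # assumed On-Demand rate, $/hour
ri_upfront = 300.00          # assumed Partial Upfront payment for a 1-year term
ri_hourly = 0.04             # assumed effective hourly charge under the RI
hours_per_year = 8760

# Hours of usage at which total RI cost equals total On-Demand cost.
breakeven_hours = ri_upfront / (on_demand_hourly - ri_hourly)
print(f"Breakeven at ~{breakeven_hours:,.0f} hours "
      f"({breakeven_hours / hours_per_year:.0%} of the year)")

# If the instance truly runs always-on, the RI yields the maximum savings.
od_cost = on_demand_hourly * hours_per_year
ri_cost = ri_upfront + ri_hourly * hours_per_year
print(f"Always-on: On-Demand ${od_cost:,.0f} vs RI ${ri_cost:,.0f} "
      f"({1 - ri_cost / od_cost:.0%} savings)")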

A Scheduled Reserved Instance, however, allows the customer to designate which hours of which days the resource will run. Common use cases include month-end reporting, daily financial risk calculations, nightly genome sequencing, or any regularly scheduled batch job.

Standard Reserved Instances

Before the Scheduled RI, the customer had three options: (1) run the compute On-Demand and pay the highest price, (2) reserve the compute with a Standard Reserved Instance (SRI) and waste the time when the job isn’t running (known as spoilage), or (3) try to run it on Spot Instances and hope the bid is met with available capacity.

Now there’s a fourth option – the Scheduled Reserved Instance. Savings are lower, typically in the 5-10% range compared to On-Demand rates, but the customer gets incredible flexibility in scheduling hours and recurrence. Oh yeah – and did I mention that AWS can now sell even more excess capacity at a discount?
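Here is a rough comparison for a nightly four-hour batch job that shows where a Scheduled RI can win. The rates are hypothetical, the 7% discount is an illustrative assumption within the 5-10% range above, and Spot is omitted because its price depends on market capacity.

# An illustrative comparison for a recurring batch job, using hypothetical rates.

on_demand_hourly = 0.10          # assumed On-Demand rate, $/hour
standard_ri_effective = 0.06     # assumed effective hourly rate of a Standard RI
scheduled_ri_discount = 0.07     # assume ~7% off On-Demand for the scheduled hours

job_hours_per_day = 4            # e.g., a nightly batch job
hours_per_year = 8760
scheduled_hours = job_hours_per_day * 365

on_demand_cost = scheduled_hours * on_demand_hourly
# A Standard RI bills for the full year whether or not the job is running
# (the "spoilage" described above).
standard_ri_cost = hours_per_year * standard_ri_effective
scheduled_ri_cost = scheduled_hours * on_demand_hourly * (1 - scheduled_ri_discount)

print(f"On-Demand only when running: ${on_demand_cost:,.0f}/year")
print(f"Standard RI (always billed): ${standard_ri_cost:,.0f}/year")
print(f"Scheduled RI:                ${scheduled_ri_cost:,.0f}/year")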

With so many options available to AWS customers, it’s important to find an AWS Premier Partner that can analyze each cloud workload and recommend the right mix of cost-reducing techniques. Whether the application usage pattern is steady state, spiky predictable, or uncertain-unpredictable, there is a combination of AWS solutions designed to save money and still maintain performance. Contact 2nd Watch today to learn more about Cloud Cost Optimization Reports.

Zach Bonugli, Managed Cloud Specialist