
5 Steps to Cloud Cost Optimization: Hurdles to Optimization are Organizational, Not Technical

In my last blog post, I covered the basics of cloud cost optimization using the Six Pillars model, focusing on the ‘hows’ of optimization and the ‘whys’ of its importance. In this blog, I’d like to talk about what comes next: preparing your organization for your optimization project. The main reason most clients delay or avoid confronting cloud optimization is that it’s incredibly complex. Challenges from cloud sprawl to misaligned corporate priorities can bring a project to a screeching halt. Understanding the challenges before you begin is essential to getting off on the right foot. Here are the 5 main challenges we’ve seen when implementing a cloud cost optimization project:

  • Cloud sprawl refers to the unrestricted, unregulated creation and use of cloud resources; cloud cost sprawl, therefore, refers to the costs incurred related to the use of each and every cloud resource (i.e., storage, instances, data transfer, etc.). This typically presents as decentralized account or subscription management.
  • Billing complexity, in this case, specifically refers to the ever-changing and variable billing practices of cloud providers and the invoices they provide you. Across all of its possible service configurations, Amazon Web Services (AWS) alone has more than 500,000 SKUs that could appear on any single invoice. If you cannot make sense of your bill up front, your cost optimization efforts will languish.
  • Lack of Access to Data and Application Metrics is one of the biggest barriers to entry. Cost optimization is a data-driven exercise. Without billing data and application metrics over time, many incorrect assumptions end up being made, resulting in higher costs.
  • Misaligned policies and methods can be the obstacle that will make or break your optimization project. When every team, organization or department has their own method for managing cloud resources and spend, the solution becomes more organizational change and less technology implementation. This can be difficult to get a handle on, especially if the teams aren’t on the same page with needing to optimize.
  • A lack of incentives may seem surprising (after all, who doesn’t want to save money?), yet it is the number one blocker we have seen in large enterprises to achieving optimization end goals. Central IT is laser focused on cost management, while application and business units are focused more on speed and innovation. Both goals are important, but without the right incentives, process, and communication, this fails every time. Building executive support to directly reapply realized optimization savings back to the business units, increasing their application and innovation budgets, is the only way to bridge misaligned priorities and build the foundation for lasting optimization motivation.

According to many cloud software vendors, waste accounts for 30% to 40% of all cloud usage. In the RightScale State of the Cloud Report 2019, a survey revealed that 35% of cloud spend is wasted. 2nd Watch has found that within large enterprise companies, there can be up to 70% savings through a combination of software and services.  It often starts by just implementing a solid cost optimization methodology.

When working on a cloud cost optimization project, it’s essential to first get the key stakeholders of an organization to agree on the benefits of optimizing your cloud spend. Once the executive team is onboard and an owner is assigned, the path to optimization is clear, covering each of the 6 Pillars of Optimization.

THE PATH TO OPTIMIZATION

STEP ONE – Scope It Out!

As with any project, you first want to identify the goals and scope and then uncover the current state environment. Here are a few questions to ask to scope out your work:

  • Overall Project Goal – Are you focused on cost savings, workload optimization, uptime, performance or a combination of these factors?
  • Budget – Do you want to sync to a fiscal budget? What is the cycle? What budget do you have for upfront payments? Do you budget at an account level or organization level?
  • Current State – What number of instances and accounts do you have? What types of agreements do you have with your cloud provider(s)?
  • Growth – Do you grow seasonally, or do you have planned growth based on projects? Do you anticipate existing workloads to grow or shrink over time?
  • Measurement – How do you currently view your cloud bill? Do you have detailed billing enabled? Do you have performance metrics over time for your applications?
  • Support – Do you have owners for each application? Are people available to assess each app? Are you able to shut down apps during off hours? Do you have resources to modernize applications?

STEP TWO – Get Your Org Excited

One of the big barriers to a true optimization is gaining access to data. In order to gather the data (step 3) you first need to get the team onboard to grant you or the optimization project team access to the information.

During this step, get your cross-functional team excited about the project, share the goals and current state info you gathered in the previous step and present your strategy to all your stakeholders.

Stakeholders may include application owners, cloud account owners, IT Ops, IT security and/or developers who will have to make changes to applications.

Remember, data is key here, so find the people who own the data. Those who are monitoring applications or own the accounts are the typical stakeholders to involve. Then share with them the goals and bring them along this journey.

STEP THREE – Gather Your Data

Data is grouped into a few buckets:

  1. Billing Data – Get a clear view of your cloud bill over time.
  2. Metrics Data – CPU, I/O, Bandwidth and Memory for each application over time is essential.
  3. Application Data – Conduct interviews of application owners to understand the nuances. Graph out risk tolerance, growth potential, budget constraints and identify the current tagging strategy.

A month’s worth of data is good, though three months of data is much better for understanding the capacity variances of applications and projecting into the future.
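As a minimal illustration of how billing data becomes useful once grouped over time, here is a Python sketch (with hypothetical field names; real exports such as the AWS Cost and Usage Report use different columns) that totals cost by month and service:

```python
from collections import defaultdict

def monthly_cost_by_service(line_items):
    """Aggregate billing line items into (month, service) -> total cost.

    `line_items` is a list of dicts with hypothetical keys 'month',
    'service', and 'cost'; a real provider export uses its own schema.
    """
    totals = defaultdict(float)
    for item in line_items:
        totals[(item["month"], item["service"])] += item["cost"]
    return dict(totals)

# Tiny illustrative dataset, not real billing data.
items = [
    {"month": "2019-01", "service": "EC2", "cost": 120.0},
    {"month": "2019-01", "service": "S3", "cost": 30.5},
    {"month": "2019-02", "service": "EC2", "cost": 98.0},
    {"month": "2019-01", "service": "EC2", "cost": 10.0},
]
print(monthly_cost_by_service(items))
```

Even this trivial grouping makes month-over-month movement per service visible, which is the starting point for every later step.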

STEP FOUR – Visualize and Assess Your Usage

This step takes a bit of skill. There are tools like CloudHealth that can help you understand your cost and usage in the cloud, and other tools that can help you understand your application performance over time. Using the data from each of these sources and correlating it across the pillars of optimization is essential to understanding where you can find the optimal cost savings.

I often recommend bringing in an optimization expert for this step. Someone with a data science, cloud and accounting background can help you visualize data and find the best options for optimization.

STEP FIVE – Plan Your Remediation Efforts and Get to Work!

Now that you know where you can save, take that information and build out a remediation plan. This should include addressing workloads in one or more of the pillars.

For example, you may shut down resources at night for an application and move it to another family of instances/VMs based on current pricing.

Your remediation should include changes by application as well as:

  1. RI Purchase Strategy across the business on a 1 or 3-year plan.
  2. Auto-Parking Implementation to park your resources when they’re not in use.
  3. Right-Sizing based on CPU, memory, I/O.
  4. Family Refresh or movement to the newer, more cost-effective instance families or VM-series.
  5. Elimination of Waste like unutilized instances, unattached volumes, idle load balancers, etc.
  6. Storage reassessment based on size, data transfer, retrieval time and number of retrieval requests.
  7. Tagging Strategy to track each instance/VM and tie it back to the right resources.
  8. IT Chargeback process and systems to manage the process.
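Once estimated savings are attached to each candidate action, ordering the remediation plan is straightforward. A minimal sketch (the applications, pillars, and savings figures below are hypothetical):

```python
def prioritize_remediations(actions):
    """Order candidate remediation actions by estimated monthly savings.

    Each action is a dict with hypothetical keys 'app', 'pillar', and
    'est_monthly_savings'; the figures would come from the assessment
    done in step four.
    """
    return sorted(actions, key=lambda a: a["est_monthly_savings"], reverse=True)

plan = prioritize_remediations([
    {"app": "reporting", "pillar": "right-sizing", "est_monthly_savings": 400.0},
    {"app": "dev-env", "pillar": "auto-parking", "est_monthly_savings": 950.0},
    {"app": "archive", "pillar": "storage", "est_monthly_savings": 150.0},
])
```

Sorting by savings is only one reasonable ordering; some teams weight by implementation effort or by how cooperative the owning application team is.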

Remediation can take anywhere from one month to a year’s time based on organization size and the support of application teams to make necessary changes.

Download our ‘5 Steps to Cloud Cost Optimization’ infographic for a summary of this process.

End Result

With as much as 70% savings possible after implementing one of these projects, you can see the compelling reason to start. Many of the benefits are organizational and long-lasting, including:

  • Visibility to make the right cloud spending decisions
  • Break-down of your cloud costs by business area for chargeback or showback
  • Control of cloud costs while maintaining or increasing application performance
  • Improved organizational standards to keep optimizing costs over time
  • Identification of short- and long-term cost savings across the various optimization pillars

Many companies reallocate the savings to innovative projects to help their company grow. The outcome of a well-managed cloud cost optimization project can propel your organization into a focus on cloud-native architecture and application refactoring.

Though complex, cloud cost optimization is an achievable goal. By cross-referencing the 6 pillars of optimization with your organization’s policies, applications and teams, you can quickly find savings of 30-40% and grow from there.

By addressing project risks like lack of awareness, decentralized account management, lack of access to data and metrics, and lack of clear goals, your team can quickly achieve savings.

Ready to get started with your cloud cost optimization? Schedule a Cloud Cost Optimization Discovery Session for a free 2-hour session with our team of experts.

-Stefana Muller, Sr Product Manager


The 6 Pillars of Cloud Cost Optimization

Let me start by painting the picture: You’re the CFO. Or the manager of a department, group, or team, and you’re ultimately responsible for any and all financial costs incurred by your team/group/department. Or maybe you’re in IT and you’ve been told to keep a handle on the costs generated by application use and code development resources. Your company has moved some or all of your projects and apps to the public cloud, and since things seem to be running pretty smoothly from a production standpoint, most of the company is feeling pretty good about the transition.

Except you.

The promise of moving to cloud to cut costs hasn’t materialized, and attempting to figure out the monthly bill from your cloud provider has you shaking your head.

Source: Amazon Web Services (AWS). “Understanding Consolidated Bills – AWS Billing and Cost Management”. (2017). Retrieved from https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/con-bill-blended-rates.html

From Reserved Instances and on-demand costs, to the “unblended” and “blended” rates, attempting to even make sense of the bill has you no closer to understanding where you can optimize your spend.

It’s not even just the pricing structure that requires an entire department of accountants to make sense of; the breakdown of the services themselves is just as mind boggling. In fact, there are at least 500,000 SKUs and price combinations in AWS alone! In addition, your team likely has no limitation on who can spin up any specific resource at any time, intrinsically compounding the problem, especially when staff leave them running, the proverbial meter racking up the $$ in the background.

Addressing this complex and ever-moving problem is not, in fact, a simple matter, and requires a comprehensive and intimate approach that starts with understanding the variety of opportunities available for cost and performance optimization. This is where 2nd Watch and our Six Pillars of Cloud Optimization come in.

The Six Pillars of Cloud Cost Optimization

  1. Reserved Instances (RIs)

AWS Reserved Instances, Azure Reserved VM Instances, and Google Cloud Committed Use Discounts take the ephemeral out of cloud resources, allowing you to estimate up front what you’re going to use. This also entitles you to steep discounts for pre-planning, which ends up as a great financial incentive.

Most cloud cost optimization efforts erroneously begin and end here, providing you and your organization with a less than optimal solution. Resources to estimate RI purchases are available through cloud providers directly and through third-party optimization tools. For example, CloudHealth by VMware provides a clear picture into where to purchase RIs based on your current cloud use over a number of months and will help you manage your RI lifecycle over time.

Two of the major factors to consider with cloud cost optimization are Risk Tolerance and Centralized RI Management portfolios.

  • Risk Tolerance refers to identifying how much you’re willing to spend up front in order to increase the possibility of future gains or recovered profits. For example, can your organization take a risk and cover 70% of your workloads with RIs? Or do you worry about consumption, and will therefore want to limit that to around 20-30%? Also, how long, in years, are you able to project ahead? One year is the least risky, sure, but three years, while also a larger financial commitment, comes with larger cost savings.
  • Centralized RI Management portfolios allow for deeper RI coverage across organizational units, resulting in even greater savings opportunities. For instance, a single application team might have a limited pool of cash in which to purchase RIs. Alternatively, a centralized, whole organization approach would cover all departments and teams for all workloads, based on corporate goals. This approach, of course, also requires ongoing communication with the separate groups to understand current and future resources needed to create and execute a successful RI management program.

Once you identify your risk tolerance and centralize your approach to RIs, you can take advantage of this optimization option. However, an RI-only optimization strategy is short-sighted: it only allows you to take advantage of the pricing options your cloud vendor offers. It is important to overlay RI purchases with the 5 other optimization pillars to achieve the most effective cloud cost optimization.
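To make the risk-tolerance trade-off concrete, here is a minimal Python sketch of how blended monthly cost shifts with RI coverage. The hourly rates are made-up placeholders, not real AWS, Azure, or Google prices:

```python
def ri_plan_cost(on_demand_hourly, ri_hourly_effective, coverage, hours=730):
    """Blended monthly cost for one instance-shaped unit of steady usage.

    `coverage` is the fraction of usage covered by RIs (e.g. 0.7 for the
    aggressive case, 0.2-0.3 for the cautious one). Rates here are
    illustrative placeholders only.
    """
    covered = coverage * hours * ri_hourly_effective
    uncovered = (1 - coverage) * hours * on_demand_hourly
    return covered + uncovered

# About 52.6/month at 70% coverage vs. roughly 73.0 all on-demand,
# with these placeholder rates.
blended = ri_plan_cost(0.10, 0.06, coverage=0.7)
```

The same function, run against your own rates and a range of coverage levels, is a quick way to frame the 20-30% vs. 70% coverage conversation with finance.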

  2. Auto-Parking

One of the benefits of the cloud is the ability to spin up (and down) resources as you need them. However, the downside of this instant technology is that there is very little incentive for individual team members to terminate these processes when they are finished with them. Auto-Parking refers to scheduling resources to shut down during off hours—an especially useful tool for development and test environments. Identifying your idle resources via a robust tagging strategy is the first step; this allows you to pinpoint resources that can be parked more efficiently. The second step involves automating the spin-up/spin-down process. Tools like ParkMyCloud, AWS Instance Scheduler, Azure Automation, and Google Cloud Scheduler can help you manage the entire auto-parking process.
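As a sketch of the scheduling logic such tools implement, the following decides whether a dev/test resource should currently be parked. The nights-and-weekends schedule is an assumption; tune it to your teams' working hours:

```python
def should_be_parked(hour, weekday, park_start=19, park_end=7):
    """Return True if a dev/test resource should be shut down right now.

    Parks nights (19:00-07:00) and weekends. `weekday` follows Python's
    datetime convention: Monday == 0 ... Sunday == 6. The schedule is
    an illustrative assumption, not a recommendation.
    """
    if weekday >= 5:  # Saturday or Sunday
        return True
    return hour >= park_start or hour < park_end
```

In practice the tag that marks a resource as parkable matters as much as the schedule itself, which is why the tagging step comes first.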

  3. Right-Sizing

Ah, right-sizing, the best way to ensure you’re using exactly what you need and not too little or too much. It seems like a no-brainer to just “enable right-sizing” immediately when you start using a cloud environment. However, without the ability to analyze resource consumption or enable chargebacks, right-sizing becomes a meaningless concept. Performance and capacity requirements for cloud applications often change over time, and this inevitably results in underused and idle resources.

Many cloud providers share best practices in right-sizing, though they spend more time explaining the right-sizing options that exist prior to a cloud migration. This is unfortunate as right-sizing is an ongoing activity that requires implementing policies and guardrails to reduce overprovisioning, tagging resources to enable department level chargebacks, and properly monitoring CPU, Memory and I/O, in order to be truly effective.

Right-sizing must also take into account auto-parked resources and RIs available. Do you see a trend here with the optimization pillars?
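As a rough illustration of the kind of rule a right-sizing analysis applies (the thresholds below are illustrative assumptions, not a provider recommendation):

```python
def rightsize_recommendation(cpu_samples, downsize_below=30.0, upsize_above=80.0):
    """Suggest a sizing action from CPU utilization samples (percent).

    Uses the 95th-percentile of observed samples so short spikes don't
    mask chronic over-provisioning. Thresholds are illustrative; a real
    analysis also weighs memory and I/O, as noted above.
    """
    ranked = sorted(cpu_samples)
    p95 = ranked[min(len(ranked) - 1, int(0.95 * len(ranked)))]
    if p95 < downsize_below:
        return "downsize"
    if p95 > upsize_above:
        return "upsize"
    return "keep"
```

A real recommendation engine would also check whether the candidate resource is already covered by an RI or scheduled for parking before suggesting a change, which is exactly the overlap between pillars the text describes.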

  4. Family Refresh

Instance types, VM-series and “Instance Families” all describe methods by which cloud providers package up their instances according to the hardware used. Each instance/series/family offers different varieties of compute, memory, and storage parameters. Instance types within their set groupings are often retired as a unit when the hardware required to keep them running is replaced by newer technology. Cloud pricing changes directly in relationship to this changing of the guard, as newer systems replace the old. This is called Family Refresh.

Up-to-date knowledge of the instance types/families being used within your organization is a vital component to estimating when your costs will fluctuate. Truth be told, though, with over 500,000 SKU and price combinations for any single cloud provider, that task seems downright impossible.

Some tools exist, however, that can help monitor/estimate Family Refresh, though they often don’t take into account the overlap that occurs with RIs—or upon application of any of the other pillars of optimization. As a result, for many organizations, Family Refresh is the manual, laborious task it sounds like. Thankfully, we’ve found ways to automate the suggestions through our optimization service offering.

  5. Waste

Related to the issue of instances running long past their usefulness, waste is prevalent in cloud. Waste may seem like an abstract concept when it comes to virtual resources, but each wasted unit in this case = $$ spent for no purpose. And, when there is no limit to the amount of resources you can use, there is also no incentive for individuals using the resources to self-regulate their unused/under-utilized instances. Some examples of waste in the cloud include:

  • AWS RDSs or Azure SQL DBs without a connection
  • Unutilized AWS EC2s
  • Azure VMs that were spun up for training or testing
  • Dated snapshots that are holding storage space that will never be useful
  • Idle load balancers
  • Unattached volumes

Identifying waste takes time and accurate reporting, which is a great reason to invest the time and energy in developing a proper tagging strategy: waste becomes instantly traceable to the organizational unit that incurred it, and is therefore easily marked for review and/or removal. We’ve often seen companies buy RIs before they eliminate waste, which, without fail, causes them to overspend in cloud – for at least a year.
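A tagged, accurate inventory makes waste checks easy to automate. Here is a minimal sketch, with a made-up inventory format, that flags the kinds of waste listed above:

```python
def find_waste(resources):
    """Flag likely-wasted resources from a simple inventory.

    Each resource is a dict with hypothetical keys; real inventories
    come from your provider's APIs. The rules mirror the examples in
    the text: unattached volumes, idle load balancers, and databases
    with no connections.
    """
    flagged = []
    for r in resources:
        if r["type"] == "volume" and not r.get("attached", False):
            flagged.append(r["id"])
        elif r["type"] == "load_balancer" and r.get("requests_30d", 0) == 0:
            flagged.append(r["id"])
        elif r["type"] == "database" and r.get("connections_30d", 0) == 0:
            flagged.append(r["id"])
    return flagged

# Illustrative inventory, not real resource data.
inventory = [
    {"id": "vol-1", "type": "volume", "attached": False},
    {"id": "vol-2", "type": "volume", "attached": True},
    {"id": "elb-1", "type": "load_balancer", "requests_30d": 0},
    {"id": "db-1", "type": "database", "connections_30d": 42},
]
flagged = find_waste(inventory)
```

With tags in place, each flagged ID can be routed straight back to its owning team for review rather than landing on a central backlog.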

  6. Storage

Storage in the cloud is a great way to reduce on-premises hardware spend. That said, though, because it is so effortless to use, cloud storage can, in a very short matter of time, expand exponentially, making it nearly impossible to predict accurate cloud spend. Cloud storage is usually charged by four characteristics:

  • Size – How much storage do you need?
  • Data Transfer (bandwidth) – How often does your data need to move from one location to another?
  • Retrieval Time – How quickly do you need to access your data?
  • Retrieval Requests – How often do you need to access your data?

There are a variety of options for different use cases including using more file storage, databases, data backup and/or data archives. Having a solid data lifecycle policy will help you estimate these numbers, and ensure you are both right-sizing and using your storage quantity and bandwidth to its greatest potential at all times.
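As a back-of-the-envelope sketch, monthly storage cost can be estimated from the characteristics above. Every rate in this example is a placeholder, not a real provider's rate card:

```python
def monthly_storage_cost(gb_stored, gb_transferred, retrieval_requests,
                         price_per_gb=0.023, price_per_gb_transfer=0.09,
                         price_per_1k_requests=0.01):
    """Rough monthly cost from size, transfer, and request volume.

    All prices are illustrative placeholders. Retrieval *time* shows up
    indirectly as the storage class you choose, which in turn changes
    `price_per_gb`.
    """
    return (gb_stored * price_per_gb
            + gb_transferred * price_per_gb_transfer
            + (retrieval_requests / 1000) * price_per_1k_requests)

# 1 TB stored, 50 GB out, 20k requests with the placeholder rates.
estimate = monthly_storage_cost(1000, 50, 20000)
```

Running this kind of estimate against your data lifecycle policy makes it obvious when cold data belongs in a cheaper class, even before you touch a provider's pricing calculator.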

So, you see, each of these six pillars of cloud cost optimization houses many moving parts, and with public cloud providers constantly modifying their service offerings and pricing, wrangling in your wayward cloud can seem unlikely. Plus, optimizing only one of the pillars without considering the others offers little to no improvement, and can, in fact, unintentionally cost you more money over time. An efficacious optimization process must take all pillars and the way they overlap into account, institute the right policies and guardrails to ensure cloud sprawl doesn’t continue, and implement the right tools to allow your team to make informed decisions regularly.

The good news is that the future is bright! Once you have completely assessed your current environment, taken the pillars into account, made the changes required to optimize your cloud, and found a method by which to make this process continuous, you can investigate optimization through application refactoring, ephemeral instances, spot instances and serverless architecture.

The promised cost savings of public cloud is reachable, if only you know where to look.

2nd Watch offers a Cloud Cost Optimization service that can help guide you through this process. Our Cloud Cost Optimization service can reduce your current cloud computing costs by as much as 25% to 40%, increasing efficiency and performance. Our proven methodology empowers you to make data driven decisions in context, not relying on tools alone. Cloud cost optimization doesn’t have to be time consuming and challenging. Start your cloud cost optimization plan with our proven method for success at https://offers.2ndwatch.com/optimization.

-Stefana Muller, Sr. Product Manager


Managing Azure Cloud Governance with Resource Policies

I love an all-you-can-eat buffet. You get a ton of value and a lot to choose from, and you can eat as much as you want, or not, for a fixed price.

In the same regard, I love the freedom and vast array of technologies that the cloud allows you: a technological all-you-can-eat buffet, if you will. However, there is no fixed price when it comes to the cloud. You pay for every resource! And as you can imagine, it can become quite costly if you are not mindful.

So, how do organizations govern and ensure that their cloud spend is managed efficiently? Well, in Microsoft’s Azure cloud you can mitigate this issue using Azure resource policies.

Azure resource policies allow you to define what, where or how resources are provisioned, thus allowing an organization to set restrictions and enable some granular control over their cloud spend.

Azure resource policies allow an organization to control things like:

  • Where resources are deployed – Azure has more than 20 regions all over the world. Resource policies can dictate what regions their deployments should remain within.
  • Virtual Machine SKUs – Resource policies can define only the VM sizes that the organization allows.
  • Azure resources – Resource policies can define the specific resources that are within an organization’s supportable technologies and restrict others that are outside the standards. For instance, if your organization supports SQL and Oracle databases but not Cosmos DB or MySQL, resource policies can enforce these standards.
  • OS types – Resource policies can define which OS flavors and versions are deployable in an organization’s environment. No longer support Windows Server 2008, or want to limit the Linux distros to a small handful? Resource policies can assist.

Azure resource policies are applied at the resource group or the subscription level. This allows granular control of the policy assignments. For instance, in a non-prod subscription you may want to allow non-standard and non-supported resources to allow the development teams the ability to test and vet new technologies, without hampering innovation. But in a production environment standards and supportability are of the utmost importance, and deployments should be highly controlled. Policies can also be excluded from a scope. For instance, an application that requires a non-standard resource can be excluded at the resource level from the subscription policy to allow the exception.

A number of pre-defined Azure resource policies are available for your use, including:

  • Allowed locations – Used to enforce geo-location requirements by restricting which regions resources can be deployed in.
  • Allowed virtual machine SKUs – Restricts the virtual machine sizes/SKUs that can be deployed to a predefined set. Useful for controlling costs of virtual machine resources.
  • Enforce tag and its value – Requires resources to be tagged. This is useful for tracking resource costs for purposes of department chargebacks.
  • Not allowed resource types – Identifies resource types that cannot be deployed. For example, you may want to prevent a costly HDInsight cluster deployment if you know your group would never need it.

Azure also allows custom resource policies when you need a restriction not covered by a built-in policy. A policy definition is described using JSON and includes a policy rule.

This JSON example denies a storage account from being created without blob encryption being enabled:

{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Storage/storageAccounts"
      },
      {
        "field": "Microsoft.Storage/storageAccounts/enableBlobEncryption",
        "equals": "false"
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}

The use of Azure Resource Policies can go a long way in assisting you to ensure that your organization’s Azure deployments meet your governance and compliance goals. For more information on Azure Resource Policies visit https://docs.microsoft.com/en-us/azure/azure-policy/azure-policy-introduction.

For help in getting started with Azure resource policies, contact us.

-David Muxo, Sr Cloud Consultant


2W Insight Cloud Cost Accounting Tool: New Features

Enabling enterprises to accurately distribute cloud expenses to their unique cost reporting structure

Accurate distribution of cloud costs among business units, applications, projects, etc. according to accepted accounting practices is one of the greatest challenges facing enterprise IT Managers and Financial Accountants today. 2W Insight 7.0 simplifies cloud cost accounting by enabling enterprises to create an organizational hierarchy of cost centers aligned to their reporting structure, where resources are assigned, budgets are managed and financial reports are published.

Organizational Hierarchy

2W Insight 7.0 enables enterprises to create a multi-level organizational structure of cost centers tailored to their financial reporting requirements. Users create cost centers for each project, application, workload, etc., then map them to the financial reporting structure. Once the cost centers and structure are established, users assign cloud resources (including reserved instances) to the cost centers where the costs are incurred. 2W Insight applies AWS pricing rules to the usage within each cost center. As you move up the hierarchy of cost centers, 2W Insight combines the usage from the linked (lower level) cost centers and re-applies the AWS pricing rules to the combined usage, ensuring pricing is accurate, earned tier discounts are applied and reserved instance savings are optimized.
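The roll-up described above can be sketched as a walk up a cost-center tree. This simplified Python example (with hypothetical cost-center names) sums direct costs into every ancestor; the product itself re-applies AWS pricing rules at each level rather than simply summing:

```python
def rollup(cost_centers, direct_costs):
    """Roll usage up a cost-center tree.

    `cost_centers` maps each cost center to its parent (None at the top);
    `direct_costs` holds the cost assigned directly to each center.
    Names are hypothetical. A real implementation would re-apply the
    provider's tiered pricing at each level instead of summing.
    """
    totals = dict(direct_costs)
    for cc, cost in direct_costs.items():
        parent = cost_centers[cc]
        while parent is not None:
            totals[parent] = totals.get(parent, 0.0) + cost
            parent = cost_centers[parent]
    return totals

# Hypothetical two-level hierarchy: company -> business unit -> apps.
tree = {"company": None, "bu-a": "company", "app-1": "bu-a", "app-2": "bu-a"}
direct = {"app-1": 100.0, "app-2": 50.0}
totals = rollup(tree, direct)
```

The important property is that every level of the hierarchy ends up with a defensible number, so a business-unit report and the company-wide report never disagree.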

Example Organizational Reporting Structure:

Assigning Resources to Cost Centers

2W Insight 7.0 enables enterprises to deploy various strategies when assigning resources to cost centers. Enterprises that place each workload in a separate AWS account can assign an account to a cost center; when assigned, all usage/cost in the AWS account will be included in the cost center. For enterprises where a single AWS account includes multiple workloads, 2W Insight enables users to filter the resources in one or multiple accounts (by tag, attribute, etc.) to locate and assign resources to cost centers. Once assigned, a rule can be added to automatically assign new resources that meet the filter criteria into the cost center. This provides strict governance and control of the resource assignments and accurate financial reporting. It also ensures that the elastic nature of the cloud (resources coming and going based on demand) is aligned to the enterprise’s cloud cost accounting policies.
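The rule-based assignment can be sketched as a first-match lookup over tag filters. The tag keys and cost-center names below are hypothetical, not 2W Insight's actual rule format:

```python
def assign_cost_center(resource_tags, rules):
    """Assign a resource to a cost center by tag-filter rules.

    `rules` is an ordered list of (required_tags, cost_center) pairs;
    the first rule whose tags are all present with matching values wins.
    Tag keys and cost-center names are illustrative assumptions.
    """
    for required, cost_center in rules:
        if all(resource_tags.get(k) == v for k, v in required.items()):
            return cost_center
    return "unassigned"

# Hypothetical rules: project tag wins over environment tag.
rules = [
    ({"project": "analytics"}, "cc-analytics"),
    ({"env": "dev"}, "cc-dev"),
]
```

The "unassigned" bucket is worth watching: anything landing there is a resource your tagging strategy missed.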

Budget Management and Alerting

Once the organizational structure is created and resources have been assigned to cost centers, it is important to manage the budget for each cost center. 2W Insight allows users to set budgets for each cost center and receive notifications when budgets are at risk. Users can receive alerts if a single day’s usage exceeds a set daily budget threshold (e.g., a single day’s cost is 120% of the daily budget), when the month-to-date cost exceeds a set monthly budget threshold (e.g., month-to-date usage reaches 100% of the monthly budget), or when the month-to-date cost exceeds a set month-to-date budget threshold (e.g., MTD cost exceeds MTD budget by 10%). Budget management and alerting ensure you know in advance if your costs are at risk of exceeding budget.
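A minimal sketch of the daily and month-to-date checks described above (the threshold percentages follow the article's examples; this is not 2W Insight's actual alerting logic):

```python
def budget_alerts(daily_cost, daily_budget, mtd_cost, mtd_budget,
                  daily_pct=120, mtd_pct=110):
    """Evaluate simple budget alert conditions.

    Fires when a single day's cost exceeds `daily_pct`% of the daily
    budget, or month-to-date cost exceeds `mtd_pct`% of the MTD budget.
    Thresholds mirror the examples in the text and are adjustable.
    """
    alerts = []
    if daily_cost > daily_budget * daily_pct / 100:
        alerts.append("daily budget threshold exceeded")
    if mtd_cost > mtd_budget * mtd_pct / 100:
        alerts.append("month-to-date budget threshold exceeded")
    return alerts
```

The value is in the early warning: a daily spike alert arrives weeks before the month-end report would have surfaced the same problem.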

Showback reports

2W Insight comes standard with month-end reports for each of your cost centers.  These “Showback” reports detail the costs associated with each of your cost centers by AWS product, and users can be set up to receive the reports at the end of each month. Once users begin receiving these reports, they become more aware and therefore more responsible for their AWS spend.

2W Insight Cloud Cost Accounting tool is provided at no charge to all of our Managed Cloud Services customers. To receive a demonstration of its capabilities and how 2nd Watch helps our clients manage the complexity of the public cloud, please contact us at insight.support@2ndwatch.com.

-Tim Hill, Product/Program Manager


Cloud Cost Complexity: Bringing the unknown unknowns to light

When first speaking to mid-size and large enterprises considering embracing the Amazon Web Services (AWS) cloud, the same themes come up consistently.  Sometimes it comes out explicitly and sometimes it is just implied, but one item that nearly all are apprehensive about is their discomfort with “unknown unknowns” (the stuff you don’t even know that you don’t know). They recognize that AWS represents a paradigm shift in how IT services are provisioned, operated, and paid for, but they don’t know where that shift might trip them up or where it will create gaps in their existing processes.  This is a great reason to work with an AWS Premier Partner, but that is a story for another day.

Let’s talk about one of the truly unknown unknowns – AWS Cost Accounting.  The pricing for Amazon Web Services is incredibly transparent.  The price for each service is clearly labeled online and publicly available.  Amazon’s list prices are the same for all customers, and the only discounts come in the form of volume discounts based on usage, or Reserved Instances (RIs).  So if all of this is so transparent, how can this be an unknown unknown?  The devil is in the details.

The scenario nearly always plays out the same way.  An enterprise starts by dipping a toe into the AWS waters.  It starts with one account, then two or three. Six months later they have 10 or 20 AWS accounts.  This is a good thing. AWS is designed to be easy to consume – Nothing more than a credit card is required to get started.  The challenge comes when your organization moves to consolidated invoice billing.  Your organization may be doing this because you want central procurement to manage the payments, you want to pool your volume for discounts, or it may be as simple as wanting it off your credit card. Either way, you now have an AWS bill that might not be what was expected (the unknown unknown).

If you have ever seen an AWS bill, you know they contain a phenomenal amount of useful information. Amazon provides a monthly spreadsheet with every line item billed for the period, with amazing detail and precision. The downside of this wealth of information is that once you start accumulating several AWS accounts on the same consolidated bill, the bill becomes exponentially more difficult to rationalize and use to track your costs.
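As a sketch of the first step a cost accounting tool performs, here is a minimal Python example that totals a consolidated bill per linked account. The field names are hypothetical; the real AWS detailed billing report uses columns like LinkedAccountId and UnBlendedCost:

```python
from collections import defaultdict

def cost_by_linked_account(bill_rows):
    """Total a consolidated bill per linked account.

    Rows are dicts with hypothetical keys 'linked_account' and
    'unblended_cost'; a real billing export has its own column names.
    """
    totals = defaultdict(float)
    for row in bill_rows:
        totals[row["linked_account"]] += row["unblended_cost"]
    return dict(totals)

# Illustrative rows, not real billing data.
rows = [
    {"linked_account": "1111", "unblended_cost": 250.0},
    {"linked_account": "2222", "unblended_cost": 75.5},
    {"linked_account": "1111", "unblended_cost": 50.0},
]
totals = cost_by_linked_account(rows)
```

Per-account totals are only the starting point; a proper tool layers on blended vs. unblended rates, RI amortization, and historical trending, which is exactly where the complexity lives.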

In contrast to the unknown unknown, the ability to accurately attribute per-workload costs is one of AWS' best features and a strong draw to the platform. For many organizations, the ability to provide showback or chargeback bills to business units is extraordinarily valuable. Once a business unit can see the direct costs of its IT resources, it can make more informed business decisions. It is amazing how often HA and DR requirements get adjusted when a business unit can run the cost/benefit analysis for each option.
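A showback report is typically just the billing data re-aggregated by a cost-allocation tag. The sketch below assumes a hypothetical `BusinessUnit` tag exported alongside each line item; untagged resources land in an "Unallocated" bucket, which is itself a useful signal for tagging hygiene.

```python
# Sketch: rolling billed line items up into a showback report by business unit.
# The "BusinessUnit" tag and the sample data are hypothetical.
from collections import defaultdict

line_items = [
    {"resource": "i-0abc", "BusinessUnit": "Marketing", "cost": 420.00},
    {"resource": "i-0def", "BusinessUnit": "Finance",   "cost": 180.00},
    {"resource": "vol-01", "BusinessUnit": "Marketing", "cost": 35.50},
    {"resource": "i-0xyz", "BusinessUnit": None,        "cost": 99.99},  # untagged
]

showback = defaultdict(float)
for item in line_items:
    # Untagged spend is surfaced explicitly rather than silently dropped.
    showback[item["BusinessUnit"] or "Unallocated"] += item["cost"]

for unit, cost in sorted(showback.items()):
    print(f"{unit}: ${cost:,.2f}")
```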

Along with the apprehension of unknown unknowns, many organizations are both excited by and a little scared of moving to a truly variable cost model. They are used to knowing what their costs are (even if they are over-provisioned). The idea that they won't know what a workload will cost until it is up and running on AWS can be a scary one. This fear can be flipped into a virtue – try it! Run a quick POC and test the workload for performance, cost, and fit. See if it works for your use case. If it does, great; if not, it didn't cost much to find out.

Managing your costs in AWS means more than just deciphering your bill this month.  It also means the ability to track historical spend by service and interpret the results.  Business units need to understand why their portion of the bill is going up or down and what is driving the change.
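Explaining why a bill moved means comparing periods service by service. A minimal sketch, using hypothetical totals for two billing periods:

```python
# Sketch: month-over-month spend change by service.
# The per-service totals below are hypothetical placeholders.
last_month = {"Amazon EC2": 1000.0, "Amazon S3": 200.0, "Amazon RDS": 300.0}
this_month = {"Amazon EC2": 1250.0, "Amazon S3": 190.0, "Amazon RDS": 300.0}

for service in sorted(set(last_month) | set(this_month)):
    prev = last_month.get(service, 0.0)
    curr = this_month.get(service, 0.0)
    delta = curr - prev
    # Guard against division by zero for services new this period.
    pct = (delta / prev * 100) if prev else float("inf")
    print(f"{service}: {delta:+.2f} USD ({pct:+.1f}%)")
```

The interesting part in a real tool is the drill-down beneath each delta (which accounts, which resources, which usage types drove it), but the top-level comparison starts this simply.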

The solution to the cost accounting challenge is to use a cost accounting tool specific to AWS. As Amazon is quick to point out, the pricing model, while transparent, is also fluid. AWS has dropped pricing on various services more than 50 times in the last few years. To effectively manage AWS costs, users want a comprehensive solution that can take a consolidated bill and make it easy to generate real insights. Most on-premises or co-located solutions cannot match the granularity and accuracy that AWS offers when paired with a properly implemented cost accounting tool. With the right tool you can take one of the unknown unknowns and make it a powerful advantage on your journey to the public cloud!

2nd Watch offers software and services that simplify your cloud billing as part of our Managed Billing solution. This solution expands upon our industry-leading cloud accounting platform with a trained concierge to help facilitate billing questions, making it easier to analyze, budget, track, forecast, and invoice your public cloud costs. Our Managed Billing Service lets you accurately allocate deployment expenses to your financial reporting structure and provides business insights through detailed usage analytics and budget reporting. We offer these services for free to our Managed Services customers. Find out more at www.2ndwatch.com/Managed-Cloud.

-By Marc Kagan, Managed Cloud Specialist


Cloud Cost Optimization with AWS

AWS regularly cuts customer costs by reducing the price of its services. This happened most recently with the price reduction of C4, M4 and R3 instances, which saw a 5% price cut when running on Linux – AWS' 51st price reduction. Customers are clearly benefiting from the scale that AWS can bring to the market. Spot Instances and Reserved Instances are another way customers can significantly reduce the cost of running their workloads in the cloud.
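The Reserved Instance savings are easy to estimate with back-of-the-envelope arithmetic. The hourly rates below are hypothetical placeholders, not current AWS pricing; substitute real rates from the AWS price list for your instance type and region.

```python
# Sketch: rough annual savings of a Reserved Instance versus On-Demand
# for an always-on workload. Rates are assumed, not actual AWS pricing.
on_demand_hourly = 0.10      # USD/hour, assumed On-Demand rate
ri_hourly_effective = 0.065  # USD/hour, assumed effective RI rate
hours_per_year = 8760

on_demand_annual = on_demand_hourly * hours_per_year
ri_annual = ri_hourly_effective * hours_per_year
savings = on_demand_annual - ri_annual

print(f"On-Demand: ${on_demand_annual:,.2f}/yr  RI: ${ri_annual:,.2f}/yr")
print(f"Savings:   ${savings:,.2f} ({savings / on_demand_annual:.0%})")
```

The same arithmetic works in reverse for workloads that are not always-on: below a certain utilization, On-Demand (or Spot) wins, which is why usage data matters before committing to reservations.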

Sometimes these cost savings are not as obvious, but they need to be understood and measured when doing a TCO calculation. AWS recently announced Certificate Manager, which allows you to request new SSL/TLS certificates and then manage them with automated renewals. The best part is that the service is free! Many vendors charge hundreds of dollars for new certificates, and AWS is now offering them at no cost. The automated renewal can also save you time and money while preventing costly outages. Just ask the folks over at Microsoft how costly an expiring certificate can be.

Another way AWS reduces the cost to manage workloads is by offering new features in an existing service. S3 Standard – Infrequent Access is an example of this. AWS offers the same eleven 9s of durability while reducing availability from four 9s to three. Customers who are comfortable going from roughly 53 minutes of downtime a year to roughly 8.8 hours of downtime per year, for objects that don't need the same level of availability, can save well over 50%, even at the highest usage levels. When you add features like encryption, versioning, cross-region replication and others, you start to see the true value. Building and configuring these features yourself in a private cloud or in your own infrastructure can be costly add-ons. AWS often offers these add-ons for free or only charges for the associated use, like the storage cost for cross-region replication.
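The downtime figures above fall straight out of the availability percentages. A quick worked calculation:

```python
# Worked arithmetic: annual downtime implied by an availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability: float) -> float:
    """Minutes per year of allowed downtime at a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

four_nines = downtime_minutes(0.9999)   # e.g., S3 Standard design availability
three_nines = downtime_minutes(0.999)   # e.g., S3 Standard-IA design availability

print(f"99.99%: {four_nines:.1f} min/yr (~{four_nines / 60:.1f} h)")
print(f"99.9%:  {three_nines:.1f} min/yr (~{three_nines / 60:.2f} h)")
```

At 99.99% that works out to about 52.6 minutes a year; at 99.9%, about 8.76 hours.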

Look beyond CPUs, memory, and bytes on disk when calculating the savings you will get with a move to AWS.  Explore the features and services you cannot offer your business from within your own datacenter or colocation facility.  Find a partner like 2nd Watch to help you manage and optimize your cloud infrastructure for long-term savings.

-Chris Nolan, Director of Product
