AWS regularly cuts customer costs by reducing the price of its services. This happened most recently with the price reduction of C4, M4 and R3 instances, which saw a 5% price cut when running on Linux – AWS's 51st price reduction. Customers are clearly benefiting from the scale that AWS can bring to the market. Spot Instances and Reserved Instances are other ways customers can significantly reduce the cost of running their workloads in the cloud.
Sometimes these cost savings are not as obvious, but they need to be understood and measured when doing a TCO calculation. AWS recently announced Certificate Manager, which allows you to request new SSL/TLS certificates and then manage them with automated renewals. The best part is that the service is free! Many vendors charge hundreds of dollars for new certificates, and AWS now offers them for free. Automated renewal can also save you time and money while preventing costly outages. Just ask the folks over at Microsoft how costly an expired certificate can be.
Another way AWS reduces the cost of managing workloads is by offering new features in an existing service. S3 Standard – Infrequent Access is an example of this. AWS offers the same eleven 9s of durability while reducing availability from four 9s to three. Customers who are comfortable going from roughly 52 minutes of downtime a year to roughly 8.8 hours per year for objects that don't need the same level of availability can save well over 50%, even at the highest usage levels. When you add features like encryption, versioning, cross-region replication and others, you start to see the true value. Building and configuring these features yourself in a private cloud or in your own infrastructure can be a costly add-on. AWS often offers these add-ons for free or only charges for the associated use, like the storage cost for cross-region replication.
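The availability trade-off is easy to quantify yourself. A minimal sketch (the helper name is ours; the math is just the complement of the availability percentage over an 8,760-hour year):

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def yearly_downtime_hours(availability_pct):
    """Expected downtime per year at a given availability percentage."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

# Four 9s (S3 Standard): about 52 minutes of downtime per year
print(int(yearly_downtime_hours(99.99) * 60))   # minutes -> 52
# Three 9s (S3 Standard - IA): about 8.8 hours per year
print(round(yearly_downtime_hours(99.9), 1))    # hours -> 8.8
```

Running the numbers like this for each workload makes the "is three 9s good enough?" conversation concrete.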
Look beyond CPUs, memory, and bytes on disk when calculating the savings you will get with a move to AWS. Explore the features and services you cannot offer your business from within your own datacenter or colocation facility. Find a partner like 2nd Watch to help you manage and optimize your cloud infrastructure for long-term savings.
-Chris Nolan, Director of Product
Amazon Web Services will continue to lower its prices for IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) for a number of different reasons. But that doesn't mean your public cloud costs will go down over time. Over the past two years I've seen SMBs and enterprise firms surprised by rising cloud costs despite falling rates. How does this happen? And how can your business get ahead of the problem?
Let’s examine how AWS can lower its rates over and over again.
First is the concept of capacity planning, which is much different in the public cloud than in the traditional days of voice and data infrastructure. In the "ole days" we used the 40-60-80 rule. Due to the lengthy lead times to order circuits, rack equipment, run cables and go live, enterprise IT organizations used 40-60-80 as triggers for new capacity-building activities. At 40% utilization, the business would begin planning for capacity expansion. At 60% utilization, new capacity would be ordered. At 80% utilization, the new capacity would be turned up and ready for go-live. All this time, IT planners would run from business unit to business unit trying to gather usage forecasts and growth plans for the next 12-24 months. It was a never-ending cycle. Wow – that was exhausting!
Second is the well-known concept of Economies of Scale, which affords AWS cost advantages due to the sheer size, scale and output of its operations globally. Simply put, more customers will lead to more usage, and Amazon’s fixed costs will be spread over more customers. As a result, the cost per unit (EC2 usage hour, Mbps of Data Transfer, or Gigabyte of S3 storage) will decrease. A lower cost per unit allows Amazon to safely lower its prices and lead the market in public cloud adoption.
In the public cloud world, Amazon can gauge customer commitment, capacity planning and growth estimates by offering reservations for its infrastructure – otherwise known as Reserved Instances. Historically, Reserved Instances have come in six varieties: three payment options – No Upfront, Partial Upfront and Full Upfront (referring to the initial down payment amount) – each offered with a 1-year or 3-year commitment. No Upfront RIs have the lowest discount factor over the commitment term, and Full Upfront RIs have the highest. With the help of Reserved Instances, AWS has been able to plan its capacity in each region by offering customers a discount for their extended commitment. Genius!
But it gets better. In January, AWS released a new type of Reserved Instance that gives the customer more time control and also provides Amazon with more insight into what time of day the AWS resource will be used. Why is this new “Scheduled Reserved Instance” important?
Well, a traditional RI is most effective when the instance runs all day and all year. There is a breakeven point for each RI type, but for simplicity let’s assume that the resource should be always-on to achieve the maximum savings.
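That breakeven point can be estimated with one line of algebra: the RI wins once its upfront payment has been amortized by the hourly savings. A quick sketch with purely hypothetical rates (these are illustration numbers, not actual AWS prices):

```python
def breakeven_hours(on_demand_rate, upfront, ri_hourly):
    """Hours of use at which total RI cost (upfront + ri_hourly * h)
    equals total on-demand cost (on_demand_rate * h)."""
    return upfront / (on_demand_rate - ri_hourly)

# Hypothetical rates for illustration only (not actual AWS prices):
od = 0.10        # $/hour on-demand
upfront = 350.0  # $ upfront payment for a 1-year term
ri_rate = 0.03   # $/hour effective rate after the upfront payment

h = breakeven_hours(od, upfront, ri_rate)
print(f"Breakeven at {h:.0f} hours ({h / 8760:.0%} of the year)")
```

With these made-up rates the RI pays for itself at 5,000 hours, about 57% of the year – which is why an RI only makes sense for resources that run most of the time.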
However, a Scheduled Reserved Instance allows the customer to designate which hours of which days the resource will run. Common use cases include month-end reporting, daily financial risk calculations, nightly genome sequencing, or any regularly scheduled batch job.
Before the Scheduled RI, the customer had 3 options – (1) run the compute on-demand and pay the highest price, (2) reserve the compute with a Standard RI and waste the time when the job’s not running (known as spoilage), or (3) try to run it on Spot Instances and hope their bid is met with available capacity. Now there’s a fourth option – The Scheduled Reserved Instance. Savings are lower, typically in the 5-10% range compared to on-demand rates, but the customer has incredible flexibility in scheduling hours and recurrence. Oh yeah – and did I mention that AWS can now sell even more excess capacity at a discount?
With so many options available to AWS customers, it’s important to find an AWS Premier Partner that can analyze each cloud workload and recommend the right mix of cost-reducing techniques. Whether the application usage pattern is steady state, spiky predictable, or uncertain-unpredictable, there is a combination of AWS solutions designed to save money and still maintain performance. Contact 2nd Watch today to learn more about Cloud Cost Optimization Reports.
-Zach Bonugli, Managed Cloud Specialist
Cloud billing is often complex and one-dimensional, and allocating costs across your organization – to the right departments and projects – can be difficult and time-consuming. With over 28,000 different ways to buy products and services from AWS, enterprises need sophisticated software and expertise to ensure they are maximizing the use of their AWS resources while optimizing their cloud spend and controlling cloud sprawl.
2nd Watch Managed Billing can help simplify your cloud billing. 2W Managed Billing provides a concierge-level billing service and online billing portal that simplifies analyzing, budgeting, tracking, forecasting and invoicing the cost of the public cloud, giving you an easy-to-understand view into your cloud costs.
Download the 2nd Watch Managed Billing datasheet to learn more about how managed billing can help you gain visibility into and understand your cloud bill. Or sign up for a free trial of 2W Managed Billing Service to start effectively managing your cloud usage and costs across your organization right away.
-Nicole Maus, Marketing Manager
When it comes to managing and monitoring IT spending, the cloud has created a new layer of complexity. Consider the fact that AWS provides as many as 28,000 service offerings, generating up to millions of billing line items each month. This creates budgeting and planning problems for CIOs because there's no easy way to interpret what percentage of cloud spending is going toward storage, compute, or network, or toward specific applications, projects and services. IT departments also need a way to merge cloud costs with on-premises IT costs to see the full picture of infrastructure spending across key categories. In addition, with many individuals from different departments procuring their own AWS resources, a company can have dozens of unmanaged and unlinked accounts. This creates gaps in financial tracking and spend management and prevents a company from taking advantage of volume discounts.
IT needs a unified model to categorize cloud and non-cloud costs together, and automation to map line items into the IT cost model each month. To automate the mapping process, 2nd Watch and Apptio have worked together on a mapping table that specifies where each Amazon product fits within a standard cost model. This mapping is now embedded in the Apptio Cost Transparency application, a solution that integrates AWS usage with billing, cost categorization, total-cost modeling (including internal labor) and self-service analytics. This allows IT organizations to categorize cloud costs into trackable categories such as Cloud Windows in Compute or Cloud Archive in Storage.
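Conceptually, such a mapping table is just a lookup from billing product codes to cost pools, applied to every line item each month. A simple sketch – the product codes are real AWS billing names, but the cost-pool categories here are illustrative examples, not Apptio's actual model:

```python
# Illustrative product-to-cost-pool mapping (categories are examples,
# not the actual Apptio cost model):
COST_MODEL = {
    "AmazonEC2": "Compute",
    "AmazonS3": "Storage",
    "AmazonGlacier": "Storage",
    "AWSDataTransfer": "Network",
}

def categorize(line_items):
    """Roll billing line items (product_code, cost) up into cost pools."""
    pools = {}
    for product, cost in line_items:
        pool = COST_MODEL.get(product, "Uncategorized")
        pools[pool] = pools.get(pool, 0.0) + cost
    return pools

print(categorize([("AmazonEC2", 120.0), ("AmazonS3", 30.0), ("AmazonGlacier", 5.0)]))
```

The value of embedding the mapping in a product is that someone maintains it as AWS adds services, so "Uncategorized" stays empty.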
Determining the run costs of an application is another goal, and requires mapping cloud resources, such as servers and storage, to individual applications. Many IT organizations have not yet adapted their management processes to track application relationship data for cloud infrastructure. Linked accounts and tagging are two ways to get around these hurdles on AWS.
Many enterprises have several AWS accounts at the team and departmental levels in order to encourage agility, but these unlinked accounts create gaps in cost and operational management. To unify unlinked accounts across an organization, companies can use the Apptio application to link individual accounts into one "master account" paid through an IT cost center. This provides visibility into enterprise spend on AWS while still maintaining business-unit-level tracking. It also enables volume-discount savings that are not possible when spending is spread across several individual AWS accounts.
AWS tags help group usage and expenses across shared key resources like databases. Tagging solves the problem of mapping AWS resources back to specific business projects, such as "Marketing Web Staging" and "Marketing Web Production." Detailed tagging can help answer questions such as: How much of an entire application portfolio is made up of AWS services? What percentage of each project uses cloud resources? One thing to keep in mind is that AWS tags apply only within individual accounts. AWS tagging is ideal for environments where you need to share resources across multiple workloads.
There are some limitations to this manual approach of managing individual accounts and tagging, however – managing numerous logins and passwords, going through the AWS setup process for each individual account, creating and controlling a tagging schema, etc. For a more scalable approach to managing AWS accounts and tagging, consider solutions like our 2W Insight billing application, which enables grouping of tags across AWS accounts and provides tools to track and analyze cloud costs by cost center, business unit, department, etc. For more information on 2W Insight, contact us.
To learn more about best practices for managing and tracking cloud spending, download our Analyzing Cloud Costs white paper.
-Jeff Aden, EVP, Marketing & Strategic Business Development
AWS enables enterprises to trade capital expense for variable expense, lower operating costs and increase speed and agility. As enterprises begin to deploy cloud services across their business, it is critical to have a standardized approach to allocating usage costs to the appropriate department or cost center. By tracking costs at the cost-center level, enterprises gain visibility throughout their organization – specifically, into who is spending precious IT funds.
To allocate costs, usage must first be grouped. AWS provides two methods to group usage: resource tags and AWS accounts. Each method is useful but also comes with downsides.
Using AWS Tagging to group usage
- Grouping by tag enables enterprises to run all of their workloads (applications) in a single AWS account, simplifying management within the AWS console.
- A tagging schema needs to be created, universally deployed and tightly controlled.
- Care has to be taken to ensure all individual AWS resources are tagged properly; any mistake in tagging will cause a resource to be left out of its group and reported incorrectly.
- Many AWS resources are un-taggable, which requires the creation and maintenance of a separate cost distribution scheme to allocate those costs across the enterprise.
- Reserved Instance (RI) discounted usage pricing cannot be linked to a single tag group and can result in significant costing inaccuracies.
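One common distribution scheme for those un-taggable (or RI-discounted) costs is to spread them across tag groups in proportion to each group's tagged spend. A sketch, with made-up group names and dollar amounts; proportional allocation is one possible scheme, not the only one:

```python
def allocate_untagged(tagged_costs, untagged_total):
    """Spread the cost of un-taggable resources across tag groups
    in proportion to each group's tagged spend."""
    total = sum(tagged_costs.values())
    return {
        group: cost + untagged_total * cost / total
        for group, cost in tagged_costs.items()
    }

# Hypothetical month: $1,000 of tagged spend plus $100 that couldn't be tagged
tagged = {"Marketing": 600.0, "Finance": 400.0}
print(allocate_untagged(tagged, 100.0))
# Marketing absorbs 60% of the untagged spend, Finance 40%
```

Whatever scheme you choose, it has to be written down and applied consistently every month, or the allocations stop being comparable over time.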
Using Multiple AWS Accounts to group usage
- Using individual AWS accounts for each workload provides the most accurate and detailed reporting of costs and usage.
- By creating a separate AWS account for each workload, enterprises can track all associated costs (including RIs) and allocate them to cost centers, departments and/or business units.
- When using AWS accounts to group usage, each account must be manually set up.
- There is no method of sharing resources, such as databases, with multiple workloads as each workload is located in separate AWS accounts.
Given the challenges of both "account based" and "tag based" grouping, we have found that the tracking methodology needs to be aligned to the applications or workloads. For deployments where the resources are 100% dedicated to a specific workload, grouping by AWS accounts is ideal, as it is the only way to ensure fully accurate costing. AWS tagging should be used when you need to share resources across multiple workloads; however, enterprises must note that costing will not be 100% accurate when using tag groups.
Tracking and Allocating Costs for Workloads with Dedicated Resources
As stated above, workloads that do not need to share resources should be set up in unique AWS accounts. This can be accomplished by establishing individual AWS accounts for each workload and mapping them directly to your enterprise organizational structure. The example below illustrates how a complex enterprise can organize its cloud expenses and provide showback or chargeback reports across the enterprise.
In this example, the enterprise would receive two bills for its cloud usage – one for Business Unit 1 and one for Business Unit 2. Under each business unit there are multiple levels of cost centers that roll up to the next higher level – typical of many enterprise organizations. In this example, AWS accounts are created for each project/workload and then rolled up to provide consolidated usage information by department and business unit. This type of structure enables:
- The owners at the “resources and workload cost accrual and tracking” levels to track their individual usage by AWS accounts, which captures 100% of the cost associated with each AWS account
- The management of department level to view the consolidated usage for their respective cost centers and workloads
- The management of each business unit to view usage by department and AWS account and receive a bill for its entire consolidated usage
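The roll-up described above can be modeled as a simple hierarchy of account totals. A sketch – the business-unit, department and account names here are invented for illustration:

```python
# Hypothetical org structure: business unit -> department -> AWS account spend
ORG = {
    "Business Unit 1": {
        "Dept A": {"acct-web-prod": 1200.0, "acct-web-dev": 300.0},
        "Dept B": {"acct-analytics": 800.0},
    },
    "Business Unit 2": {
        "Dept C": {"acct-mobile": 500.0},
    },
}

def rollup(org):
    """Consolidate per-account spend by department and business unit."""
    report = {}
    for bu, depts in org.items():
        dept_totals = {d: sum(accts.values()) for d, accts in depts.items()}
        report[bu] = {"departments": dept_totals, "total": sum(dept_totals.values())}
    return report

for bu, data in rollup(ORG).items():
    print(bu, data["total"], data["departments"])
```

Because each account captures 100% of its workload's cost, the totals at every level of the hierarchy are exact – the property that makes account-based grouping attractive.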
This provides a reliable and accurate methodology to track and allocate cloud usage based on your distinct enterprise organizational structure. It does, however, require a disciplined approach to creating new projects and updating your expense management platform to provide executive-level dashboards and the ability to drill down to detailed consumption reports by cost center. This enables enterprise IT to provide executive-level transparency while keeping excessive resource consumption under control and reducing IT costs.
Tracking and Allocating Costs for Workloads with Shared Resources
In many organizations there is a need to share key resources, such as databases, across multiple workloads. In these cases it is a best practice to use AWS tags to group your expenses. This method requires careful set up of resources and the creation of a schema to allocate shared resources and resources that cannot be tagged across the enterprise.
Tagging allows enterprises to assign their own metadata to each taggable resource. Tags have no semantic meaning to AWS and are interpreted strictly as a string of characters. Tags are made up of both a "Key" and a "Value". AWS allows up to 10 Keys for each resource, and each Key can have unlimited values, enabling very detailed grouping of resources. Tagging should be set up based on the needs of the organization and the AWS architecture design. The image below illustrates how to establish a tagging scheme for a 2-Tier Auto-scalable Web Application.
As the project moves from Web Sandbox to Web Staging to Web Production, you can use tags to track usage. While the application is in the sandbox, all resources are tagged with "Web Sandbox" under the appropriate keys (Environment, Owner, App and/or IT Tower). When the project moves to "Web Staging," you simply replace the original values with the ones associated with the next step in development.
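The stage promotion above can be expressed as a small tag-generating helper, so every resource in a stage gets an identical, schema-conforming tag set. The key names follow the Environment/Owner/App pattern from the text; the function name and values are illustrative:

```python
STAGES = ("Web Sandbox", "Web Staging", "Web Production")

def web_app_tags(stage, owner="team-web"):
    """Tag set (Key/Value pairs, AWS-style) for one stage of the
    2-tier web app. Promotion is just re-tagging with the next stage."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    return [
        {"Key": "Environment", "Value": stage},
        {"Key": "Owner", "Value": owner},
        {"Key": "App", "Value": "Marketing Web"},
    ]

# Promoting from sandbox to staging: generate the staging tag set
print(web_app_tags("Web Staging"))
```

Generating tags from one function (rather than typing them by hand in the console) is what keeps the schema "universally deployed and tightly controlled," as the earlier section puts it.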
While there is no one-size-fits-all solution to AWS expense management, deploying one or both of these methods can provide you the visibility necessary to successfully meet the tracking and analytical needs of your enterprise.
-Tim Hill, Product Manager
As firms progress through the transition from traditional IT to the AWS Cloud, there is often a moment of fear and anxiety related to managing cost. The integration and planning group has done an excellent job of selecting the right consulting partner. Contracts have been negotiated by legal and procurement. Capital funding has been allocated to cover the cost of application migrations. Designs are accepted and the project manager has laid out the critical path to success. Then at the last hour, just before production launch, the finance team chimes in – “How are we going to account for each application’s monthly usage?”
So much planning and preparation is put into integration, because we’ve gone through this process with each new application. But moving to the public cloud presents a new challenge, one that’s easily tackled with a well-developed model for managing cost in a usage-based environment.
AWS allows us to deploy IaaS (Infrastructure as a Service), and that infrastructure is shared across all of our applications in the cloud. With the proper implementation of AWS Resource Tags, cloud resources can be associated with unique applications, departments, environments, locations and any other category for which cost-reporting is essential.
Firms must have the right dialog in the design process with their cloud technology partner. Here’s an outline of the five phases of the 2nd Watch AWS Tagging Methodology, which has been used to help companies plan for cloud-based financial management:
Phase 1: Ask Critical Questions – Begin by asking Critical Questions that you want answered about utilization, spending and resource management. Consider ongoing projects, production applications, and development environments. Critical Questions can include: Which AWS resources are affecting my overall monthly bill? What is the running cost of my high availability features like warm standby, pilot light or geo-redundancy? How do architectural changes or application upgrades change my monthly usage?
Phase 2: Develop a Tagging Strategy – The Cloud Architect will interpret these questions and develop a tagging strategy to meet your needs. The strategy then becomes a component of the Detailed Design and is later implemented during the build project. During this stage it's important to consider the enforcement of standards within the organization. Configuration drift occurs when other groups don't use the standardized AWS Resource Tags or naming conventions that were defined. Later, when it's time for reporting, this creates problems for accounting and finance.
Phase 3: Determine Which AWS Resources Are In Scope – Solicit feedback from your internal accounting department and application owners. Create a list of AWS Resources and applications that need to be accounted for. Refer frequently to AWS online documentation because the list of taggable resource types is updated often.
Phase 4: Define How Chargebacks and Showbacks Will Be Used – Determine who will receive usage-based reports for each application, environment or location. Some firms have adopted a Chargeback model in which the accounting team bills the internal groups who have contributed to the month’s AWS usage. Others have used these reports for Showback only, where the usage & cost data is used for planning, forecasting and event correlation. 2W Insight offers a robust reporting engine to allow 2nd Watch customers the ability to create, schedule and send reports to stakeholders.
Phase 5: Make Regular Adjustments For Optimization – Talk to your Cloud Architect about automation to eliminate configuration drift. Incorporate AWS tagging standards into your CloudFormation templates. Regularly review tag keys and values to identify non-standard use cases. And solicit the feedback of your accounting team to ensure the reports are useful and accurate.
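The drift review in Phase 5 can be partly automated: compare each resource's tags against the required keys from your Phase 2 standard and flag the gaps. A sketch over an in-memory inventory (in practice the tag data would come from an inventory export or the AWS API; the key set and resource ids here are examples):

```python
REQUIRED_KEYS = {"Environment", "Owner", "App"}  # example Phase 2 standard

def find_drift(resources):
    """Return resources missing any required tag key, with the
    missing keys listed. `resources` maps resource id -> tag dict."""
    return {
        rid: sorted(REQUIRED_KEYS - set(tags))
        for rid, tags in resources.items()
        if not REQUIRED_KEYS <= set(tags)
    }

inventory = {
    "i-0abc": {"Environment": "Prod", "Owner": "team-web", "App": "Web"},
    "i-0def": {"Environment": "Prod"},  # drifted: missing Owner and App
}
print(find_drift(inventory))  # {'i-0def': ['App', 'Owner']}
```

Running a report like this on a schedule turns tag hygiene from a quarterly cleanup into a routine check.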
Working with an AWS Premier Consulting Partner is critical to designing for best practices like cost management. Challenge your partner and ask for real-world examples of AWS Resource Tagging strategies and cost reports. Planning to manage costs in the cloud is not a step that should be skipped. It’s critical to incorporate your financial reporting objectives into the technical design early, so that they can become established, standardized processes for each new application in the cloud.
For more information, please reach out to Zachary Bonugli email@example.com.
– Zachary Bonugli, Global Account Manager