FinOps Driven Modernization: An Approach for Large Enterprises

Congratulations! You made it to the cloud.

You made a decision and a plan. You selected a migration partner. And you exited your traditional datacenter successfully by migrating thousands of virtual machines to the public cloud. You breathed a huge sigh of relief because it wasn’t easy, but you and your team pulled through.

While you and your teams were preparing to return to focusing on business value-driven tasks and features, the newly minted cloud estate was ticking away like a taxi meter 24 hours a day, seven days a week. The first invoice came, and it seemed a little higher than your forecast. The second monthly invoice was even higher than the first! Your business units (BUs) are now all-in on the cloud, just like you asked, and deploying resources and new environments at will like kids in a candy store. The invoices keep coming, and eventually, Finance takes notice. “What happened here? I thought moving to the cloud would reduce costs?”  If this sounds familiar, you’re not alone.

 

As with many new technologies and strategies, moving to the public cloud comes with risks and rewards. The cloud value proposition is multi-faceted and, according to AWS, includes:

  • Total Cost of Ownership (TCO) Reduction
  • Staff Productivity
  • Operational Resilience
  • Business Agility

For many enterprises, the last three pillars of productivity, resilience, and agility have gotten overshadowed by the promise of a lower TCO. It’s not hard to understand why. Measuring cloud usage costs is easy. The cloud service provider (CSP) does this for you every month. The idea that migrating to the cloud is a cost-driven exercise excludes three-fourths of the potential business value – especially when migrating with a lift-and-shift approach. 

The Lift-and-Shift Approach

When you consider workloads like black boxes, you start your journey without complete visibility into the public cloud’s opportunities. Maybe you had an expiring datacenter contract and had to evacuate under time pressure. That’s understandable. But were you educated and prepared for the tradeoffs of that approach? Or were you shocked by the first invoice and the speed at which the invoices are growing? Did you prepare the CFO in advance and share the next steps? So, what did you miss?  

When you took a black-box lift-and-shift (BBLAS) approach, the focus was on moving virtual machines in groups based on dependency mapping. Your teams or your cloud partner, usually with the help of automation, defined the groups and then worked with you to plan the movement of those groups – typically referred to as “wave planning.” What you ended up with is a mirror image of your datacenter in the public cloud. 

You have now migrated to someone else’s datacenter.

The old datacenter was predictable: fixed hardware investments dictated capacity, and efficiency efforts only occurred when available resources started to dwindle as new and existing services and applications were deployed or scaled. This new datacenter charges by the millisecond, offers effectively unlimited capacity, and puts the investment in additional capacity in the hands of engineers, bypassing procurement entirely. Controlling costs in this new datacenter is a whole new world for most enterprises. Enter the FinOps movement.

“FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions.” – finops.org

Related: How to Choose the Best Cloud Service Provider for Your Application Modernization Strategy

What is FinOps?

FinOps is to finance and engineering as DevOps is to development and operations. The FinOps philosophy and approach are how you regain cost control in a BBLAS environment. Before diving into how FinOps can help, let’s look at the Cloud Cost Optimization Cycle (CCOC). The CCOC is a precursor to the FinOps framework and another black-box approach to cost efficiency in the cloud.

A black-box approach is when virtual machines are viewed as a fixed infrastructure without regard to the applications and services running on them. Seasoned professionals have lived through this traditional IT view for years, and it is what separates operations and development concerns. DevOps philosophy is making inroads, of course, but many enterprises have only begun to introduce this philosophy at scale.

 

The Cloud Cost Optimization Cycle goes like this. Every month your CSP makes cost and usage data available. An in-house resource or a consultant analyzes the massive amount of data and prepares recommendations for potential savings. The consultant presents these recommendations to the operations team, which then reconfigures the deployed infrastructure to achieve cloud savings. 
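To make the analysis step of this cycle concrete, here is a minimal sketch of flagging right-sizing candidates from monthly cost and usage data. The record fields, sample figures, and 40% CPU threshold are illustrative assumptions for demonstration, not a CSP-defined schema:

```python
# Hypothetical sketch of the CCOC analysis step: flag instances whose average
# CPU utilization suggests a smaller (cheaper) instance would suffice.

def rightsizing_candidates(usage_records, cpu_threshold=40.0):
    """Return instances whose average CPU stayed below the threshold."""
    candidates = []
    for rec in usage_records:
        if rec["avg_cpu_percent"] < cpu_threshold:
            candidates.append({
                "instance_id": rec["instance_id"],
                "current_type": rec["instance_type"],
                "avg_cpu_percent": rec["avg_cpu_percent"],
                "monthly_cost": rec["monthly_cost"],
            })
    # Surface the biggest savings opportunities first.
    return sorted(candidates, key=lambda c: c["monthly_cost"], reverse=True)

records = [
    {"instance_id": "i-0a1", "instance_type": "m5.2xlarge", "avg_cpu_percent": 12.0, "monthly_cost": 280.0},
    {"instance_id": "i-0b2", "instance_type": "c5.xlarge",  "avg_cpu_percent": 71.0, "monthly_cost": 125.0},
    {"instance_id": "i-0c3", "instance_type": "r5.large",   "avg_cpu_percent": 25.0, "monthly_cost": 92.0},
]

for c in rightsizing_candidates(records):
    print(c["instance_id"], c["current_type"], c["monthly_cost"])
```

In practice the recommendation logic is far richer (memory, I/O, burst patterns), but the shape of the work is the same: monthly data in, prioritized reconfiguration list out.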

This cycle can produce significant savings at scale and is the traditional starting point for gaining visibility and control over runaway cloud costs. The process follows the FinOps-recommended progression of crawl, walk, run toward a mature practice. This approach has both benefits and limitations:

Benefits:

  • Infrastructure-focused cost savings
  • Brings financial accountability and cloud spend awareness to the enterprise
  • Sets a trajectory towards FinOps best practices (crawl, walk, run)
  • Accomplished primarily by the Operations team

Limitations:

  • Cost savings are limited to infrastructure and cloud configuration changes
  • Application architecture remains unchanged
  • Can create friction between Operations and AppDev teams
  • App refactoring focused on patching instead of business value or modernization

The Black-Box Dilemma

In an ideal state, operations teams can iteratively reconfigure public cloud infrastructure based on cost and usage data until the fleet of virtual machines and associated storage are fully optimized. In this ideal state, interactions with the application teams are minimal and driven by the operation team’s needs. The approach ignores the side effects that right-sizing infrastructure can have on somewhat brittle monolithic legacy applications. What usually happens in a BBLAS environment is that the lift-and-shift migration and the subsequent CCOC reveal unforeseen shortcomings in the application architecture, and runtime defects surface. 

CCOC – Mixed Results

A lack of necessary cloud skills and experience on the operations side can exacerbate these issues. For example, if the operations team chooses the wrong cloud instance type for a workload, applications can become bound by resource constraints. When cloud skills and experience are missing on the application development side, defects become difficult for the team to triage and patch, causing long delays. So now, instead of cost optimization efforts simply producing savings, they produce a mixture of savings and new issues to address.

This combination creates an environment where engineering and operations teams begin to collide. The applications were stable in the old datacenter due to factors like:

  • Extremely low network latency between services
  • Applications and databases tuned for the hardware they were running on
  • Debugging and quality processes tuned over the years for efficiency

Now application teams have a new stream of issues entering their backlogs driven by fundamental changes in infrastructure introduced by the noble pursuit of tuning for cost savings. Business value, architectural improvements, and elimination of technical debt are slowed to the point that Application Development leaders start to push back on the CCOC. Operations teams don’t understand why the application is falling apart because the metrics and the cloud cost data they collect support the reduction or reconfiguration of cloud resources. Additional factors are now in play from an application development perspective with a black-box cloud cost optimization strategy:

  • Users are constantly communicating new feature requests to the business
  • Enterprise and Application Architects are pushing teams for modernization
  • Software Team leads are insisting on dedicating capacity to technical debt reduction

Enterprises are struggling to retain developers and are more resource-constrained than ever, so when flaws in legacy applications need patching, time to market for features and architectural improvements slows across the board.

 

Going Beyond the Lift-and-Shift

You need a different strategy to overcome these challenges. You must look inside the black boxes to move forward. The CCOC, at its best, will produce a finely tuned version of a legacy application running in the public cloud. You can address the cost pillar of the cloud value framework from an operations perspective, but additional opportunities abound in the form of Application Modernization.

 Enterprises in situations like the one described here need to do two things to move forward on their cloud journeys.

  1. Mature cloud cost optimization towards FinOps
  2. Invest in Application Modernization

These two strategies are complementary and, combined, form what 2nd Watch has dubbed “FinOps Driven Modernization.”

The amount of cost and usage data available to enterprises operating in the cloud reveals an opportunity to use that data to drive application modernization strategy at scale across all business units. The biggest challenges in approaching application modernization at scale are:

  • Resource constraints
  • Cloud skills and experience
  • Analysis paralysis – where do we start
  • Calculating Return on Investment

Without resource capacity that includes the necessary cloud architecture and operations skills, modernization efforts will be slow and costly, and they will not produce further buy-in through the socialization of success stories. Getting started seems impossible when an enterprise consists of multiple business units and thousands of virtual machines spread across hundreds of accounts and development teams. Modernization costs rise dramatically when cloud cost optimization requires changes to the software running on the virtual machines, which can carry significantly higher risk than changing instance families or reconfiguring storage tiers.

How Can FinOps Help Drive Modernization?

 

Let’s look at how maturing FinOps drives modernization opportunities and capacity. We discussed how an infrastructure-focused CCOC could slow down features, business value, and modernization efforts. 

 A potentially significant percentage of the savings realized from this approach will be diverted to triaging and patching application issues.

  • Do the additional development efforts overshadow the infrastructure savings?
  • Is the time to market for new features slowed to the point that the enterprise’s competitive advantage suffers?  

Most enterprises don’t have the processes in place to answer these questions. FinOps Driven Modernization is the answer. With the data from the CSP, the FinOps team can work with the operations and development teams to determine if an optimization recommendation is feasible and valuable to the business.

How does this work at scale among all business units? When you combine cost and usage data with information like:

  • Process information from inside each virtual machine
  • SLA metrics
  • Service ticket and bug metrics
  • Nature of the cloud service
    • IaaS, PaaS, FaaS
    • More on this in the next installment of this series
  • Revenue – unit economics

You begin to see a more holistic view of the cloud estate and can derive insights that include cost and much deeper business intelligence.
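One way to picture how these signals combine is a simple weighted ranking of applications. The signals, weights, normalization, and sample data below are assumptions for illustration only, not a 2nd Watch formula:

```python
# Illustrative sketch: blend cost and operational signals into a single score
# so the costliest, noisiest applications rise to the top of the
# modernization backlog.

def modernization_score(app, max_cost, max_tickets):
    # Normalize each signal to 0..1 so the weights are comparable.
    cost_signal = app["monthly_cost"] / max_cost
    ticket_signal = app["open_tickets"] / max_tickets
    sla_signal = 1.0 - app["sla_attainment"]  # lower attainment => higher signal
    # Weighted blend (weights are illustrative assumptions).
    return 0.5 * cost_signal + 0.3 * ticket_signal + 0.2 * sla_signal

apps = [
    {"name": "billing",    "monthly_cost": 42000, "open_tickets": 61, "sla_attainment": 0.97},
    {"name": "catalog",    "monthly_cost": 9000,  "open_tickets": 8,  "sla_attainment": 0.999},
    {"name": "legacy-erp", "monthly_cost": 38000, "open_tickets": 74, "sla_attainment": 0.92},
]

max_cost = max(a["monthly_cost"] for a in apps)
max_tickets = max(a["open_tickets"] for a in apps)
ranked = sorted(apps, key=lambda a: modernization_score(a, max_cost, max_tickets), reverse=True)
print([a["name"] for a in ranked])
```

Even a toy model like this shows why cost data alone is not enough: the application with the highest bill is not necessarily the one where modernization investment pays off first.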

Sample FinOps Output from 2nd Watch

Consider being able to visualize where to focus cost optimization and modernization efforts across multiple business units, thousands of virtual machines, and hundreds of applications in a single-pane-of-glass dashboard. The least innovative, noisiest, and most costly areas in your enterprise will begin glowing like hot coals. You can then focus the expenditure of resources, time, and money on high-impact optimization and modernization investments. This reallocation of spending is the power of FinOps Driven Modernization. Finance, operations, engineering, product, and executives are all working together to ensure that the enterprise realizes the actual value of the cloud.

 

Related: How FinOps Can Optimize Cloud Costs and Drive Innovation

A Business Case for FinOps

Let’s dig into a hypothetical business unit struggling with cloud costs. The FinOps team has identified that the BU’s per-unit costs exceed the recommended range for its cloud cost-to-revenue ratio. The power of FinOps Driven Modernization has revealed that the BBLAS approach resulted in a fleet of virtual machines running commonly modernized workloads: web servers, database servers, file or image servers, etc. In addition to this IaaS-heavy approach, the BU heavily leverages licensed software and operating systems. This revelation triggers a series of interviews with BU leadership and application owners to investigate the potential and level of effort of introducing application modernization approaches. The teams within the BU know there is room for improvement but lack the skills and available resources to act.
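The unit-economics check that triggers an investigation like this can be sketched in a few lines. The 8% target ratio and the business-unit figures are hypothetical numbers chosen for illustration:

```python
# Hedged sketch of a FinOps unit-economics check: flag business units whose
# cloud cost-to-revenue ratio exceeds an agreed target.

TARGET_RATIO = 0.08  # assumed acceptable cloud cost-to-revenue ratio

business_units = {
    "retail":    {"cloud_cost": 120_000, "revenue": 2_500_000},
    "logistics": {"cloud_cost": 310_000, "revenue": 2_200_000},
}

flagged = []
for name, bu in business_units.items():
    ratio = bu["cloud_cost"] / bu["revenue"]
    if ratio > TARGET_RATIO:
        flagged.append(name)
    print(f"{name}: cost-to-revenue ratio {ratio:.1%}")

print("over target:", flagged)
```

A BU landing on the flagged list is the signal to start the interviews described above, not a verdict on its own; the ratio only tells you where to look.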

They learn through the interview process that they can move licensed databases from virtual machines to a managed cloud platform. Additionally, they discover they could migrate most of the databases to open-source alternatives. Further, they can decommission the cluster of file servers and migrate the data to cloud-native storage with minimal application refactoring. By leveraging the CSP and operational data, a business case for investing in helping the BU make improvements writes itself.  

Without leveraging the FinOps philosophy and extending it with a focus on application modernization, this business unit would have operated for years in a BBLAS state, costing the enterprise orders of magnitude more in cloud spend than the investment in modernization. Extending this approach across the enterprise takes cloud cost management to the next level, resulting in purpose-driven, high-impact progress towards realizing the value of the public cloud.

FinOps is the practice that every enterprise should be adopting to help drive financial awareness throughout the organization. FinOps enables an inclusive and virtuous cycle of continuously improving when leveraged as a driver for application modernization.

Schedule a whiteboard session with our FinOps and Application Modernization experts to discover how 2nd Watch’s approach can help you and your team meet your transformation objectives.

Jesse Samm, Application Modernization Practice Director at 2nd Watch

 


Managed Cloud Services: Optimize, Reduce Costs, and Efficiently Achieve your Business Goals

Cloud adoption is becoming more popular across all industries, as the cloud has proven to be reliable, efficient, and secure. As cloud adoption increases, companies face the challenge of managing these new environments and their operations, which ultimately impacts day-to-day business. Not only are IT professionals faced with juggling their everyday work activities alongside managing their company’s cloud platforms, but they must do so in a timely, cost-efficient manner. Often, this requires hiring and training additional IT people—resources that are getting more and more difficult to find.

This is where a managed cloud service provider, like 2nd Watch, comes in.

What is a Managed Cloud Service Provider?

Managing your cloud operations on your own can seem like a daunting, tedious task that distracts from strategic business goals. A cloud managed service provider (MSP) monitors and maintains your cloud environments, relieving IT from day-to-day cloud operations and ensuring your business operates efficiently. This is not to say IT professionals are incapable of performing these responsibilities; rather, outsourcing allows the IT professionals within your company to concentrate on the strategic operations of the business. In other words, you do what you do best, and the service provider takes care of the rest.

The alternative to an MSP is hiring and developing within your company the expertise necessary to keep up with the rapidly evolving cloud environment and cloud native technologies. Doing it yourself factors in a hiring process, training, and payroll costs.

While possible, maintaining your cloud environments internally might not be the most feasible option in the long run. Additionally, a private cloud environment can be costly and requires that your applications be handled internally. Migrating to the public cloud or adopting a hybrid cloud model gives companies flexibility, as either allows a service provider partial or full control of the network infrastructure.

What are Managed Cloud Services?

Managed cloud services are the IT functions you give your service provider to handle, while still allowing you to handle the functions you want. Some examples of the management that service providers offer include:

  • Managed cloud database: A managed database puts some of your company’s most valuable assets and information into the hands of a complete team of experienced Database Administrators (DBAs). DBAs are available 24/7/365 to perform tasks such as database health monitoring, database user management, capacity planning and management, etc.
  • Managed cloud security services: The public cloud has many benefits, but with it also comes security risks. Security management is another important MSP service to consider for your business. A cloud managed service provider can prevent and detect security threats before they occur, while fully optimizing the benefits provided by a cloud environment.
  • Managed cloud optimization: The cloud can be costly, but only as costly as you allow it to be. An MSP can optimize cloud spend through consulting, implementation, tools, reporting services, and remediation.
  • Managed governance & compliance: Without proper governance, your organization can be exposed to security vulnerabilities. Should a disaster occur within your business, such as a cyberattack on a data center, MSPs offer disaster recovery services to minimize recovery downtime and data loss. A managed governance and compliance service with 2nd Watch helps your Chief Security and Compliance Officers maintain visibility and control over your public cloud environment to help achieve on-going, continuous compliance.

At 2nd Watch, our foundational services include a fully managed cloud environment with 24/7/365 support and industry-leading SLAs. Our foundational services address the key needs of better managing spend, utilization, and operations.

What are the Benefits of a Cloud Managed Service Provider?

Using a Cloud Managed Service Provider comes with many benefits if you choose the right one.

Some of these benefits include, but are not limited to: 

  • Cost savings: MSPs have experts that know how to efficiently utilize the cloud, so you get the most out of your resources while reducing cloud computing costs.
  • Increased data security: MSPs ensure proper safeguards are utilized while proactively monitoring and preventing potential threats to your security.
  • Increased employee production: With less time spent managing the cloud, your IT managers can focus on the strategic business operations.
  • 24/7/365 management: Not only do MSPs take care of cloud management for you, but they do so 100% of the time.
  • Overall business improvement: When your cloud infrastructure is managed by a trusted cloud advisor, they can optimize your environments while simultaneously allowing time for you to focus on core business operations. They can also recommend cloud native solutions to further support the business agility required to compete.

Why Our Cloud Management Platform?

With cloud adoption increasing in popularity, choosing a managed cloud service provider to help with this process can be overwhelming. While there are many options, choosing one you can trust is important to the success of your business. 2nd Watch provides multi-cloud management across AWS, Azure, and GCP, and has a special emphasis of putting our customers before the cloud. Additionally, we use industry standard, cloud native tooling to prevent platform lock in.

The solutions we create at 2nd Watch are tailored to your business needs, creating a large and lasting impact on our clients. For example:

  • On average, 2nd Watch saves customers 41% more than if they managed the cloud themselves (based on customer data)
  • Customers experience increased efficiency in launching applications, adding an average 240 hours of productivity per year for your business
  • On average, we save customers 21% more than our competitors

Next Steps

2nd Watch helps customers at every step in their cloud journey, whether that’s cloud adoption or optimizing your current cloud environment to reduce costs. We can effectively manage your cloud, so you don’t have to. Contact us to get the most out of your cloud environment with a managed cloud service provider you can trust.

-Tessa Foley, Marketing


5 Cloud Optimization Benefits

When making a cloud migration, a common term that gets tossed around is “cloud optimization”. If your organization is new to the cloud, optimizing your environment is essential to ensuring your migration pays off quickly and continues to do so in the long term.

If your organization is already established in the cloud, you may observe higher costs than expected due to cloud sprawl, under-utilized resources, and improper allocation of resources. Cloud optimization helps your organization reduce these costs and improve overall efficiency in the cloud.

What is cloud optimization?

The definition of cloud optimization may vary from one cloud service provider to another, but generally, cloud optimization is the process of analyzing, configuring, provisioning, and right-sizing cloud resources to maximize performance and minimize waste for cost efficiency. The reality is that many organizations’ cloud environments are configured in an inefficient manner that creates unnecessary cloud spend. With proper cloud optimization tools and practices, these unnecessary costs can be eliminated.

While cloud optimization is mostly discussed in terms of cloud spend, cost optimization is simply one facet of cloud optimization, which extends to overall performance and organizational efficiency. Some examples of cloud optimization practices that your organization can adopt right now include:

  • Right-sizing: Matching your cloud computing instance types (i.e. containers and VMs) and sizes with enough resources to sufficiently meet your workload performance and capacity needs to ensure the lowest cost possible.
  • Family Refresh: Replace outdated systems with updated ones to maximize performance.
  • Autoscaling: Scale your resources according to your application demand so you are only paying for what you use.
  • Applying Discounts: Reserved instances (RIs) allow companies to commit to cloud resources for a long period of time. The longer the commitment and the more a company is prepared to pre-pay at the beginning of the period, the greater the discount will be. Discounted pricing models like RIs and spot instances will drive down your cloud costs when matched to your workload.
  • Identify Use of RIs: Identifying where RIs apply can be an effective way to save money in the cloud when they are used for suitable workloads.
  • Eliminate Waste: Regulating unused resources is a core component of cloud optimization. If you haven’t already considered cloud optimization practices, you are most likely using more resources than necessary or not using certain resources to their full capacity.
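The right-sizing practice from the list above boils down to a simple idea: pick the smallest size whose capacity covers observed peak demand plus headroom. The size-to-vCPU table and the 20% headroom figure below are illustrative assumptions, not a vendor sizing chart:

```python
# Hedged sketch of right-sizing: choose the smallest instance size whose
# capacity covers peak usage plus a safety margin.

SIZES = [("large", 2), ("xlarge", 4), ("2xlarge", 8), ("4xlarge", 16)]  # (size, vCPUs)

def right_size(peak_vcpus_used, headroom=0.2):
    """Return the smallest size with capacity >= peak usage * (1 + headroom)."""
    required = peak_vcpus_used * (1 + headroom)
    for size, vcpus in SIZES:
        if vcpus >= required:
            return size
    return SIZES[-1][0]  # cap at the largest size available

print(right_size(1.5))  # 1.8 vCPUs required
print(right_size(5.0))  # 6.0 vCPUs required
```

Real right-sizing also weighs memory, network, and storage I/O, but the headroom-over-peak pattern is the core of every variant.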

Why is cloud optimization important?

Overspending in the cloud is a common issue many organizations face by allocating more resources to a workload than necessary. Integrating cloud optimization practices can reap many benefits for your cloud infrastructure and your organization, including the following:

  • Cloud Efficiency: When workload performance, compliance, and cost are continually balanced against the best-fit infrastructure in real-time, efficiency is achieved. Implementing cloud optimization practices will eliminate as much cloud resource waste as possible, increasing the performance of your cloud environment.
  • Cost Savings: Although cloud optimization comes in a variety of forms, cost optimization is the most important component for many organizations. By reducing waste in the cloud, costs are reduced as a byproduct.
  • Greater Visibility: Cloud optimization practices utilize analytics to provide visibility into your cloud environment to make data-driven decisions. Implementing optimization tools also provides cost visibility, so your organization has a better perspective on cloud spend.
  • Increased Productivity: Once a cloud optimization strategy is implemented, IT teams will spend less time trying to solve problems because an optimized environment prevents problems before they occur.
  • Organizational Innovation & Efficiency: Implementing cloud optimization often is accompanied by a cultural shift within organizations such as improved decision-making and collaboration across teams.

What are cloud optimization services?

Public cloud service providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud offer over 500,000 distinct prices and technical combinations that can overwhelm even the most experienced IT organizations and business units. Luckily, services already exist that can help your organization achieve the cloud optimization it needs to drive business outcomes. Cloud optimization services help your organization identify areas of improvement for cost savings and efficiency, create an optimization strategy, and manage your cloud infrastructure for continuous optimization.

At 2nd Watch, we take a holistic approach to cloud optimization. We have developed various optimization pillars based on real-time data to ensure your cloud environments are running as efficiently as possible. Behind our solutions for cloud optimization is a team of experienced data scientists and architects that help you maximize the performance and returns of your cloud assets. Our services offerings for cloud optimization at 2nd Watch include:

  • Strategy & Planning: Define your optimization strategy with our proven methodology, tailored to meet your desired business outcomes and maximize your results.
  • Cost Optimization Assessment: Gain the visibility necessary to make data-driven decisions. Identify opportunities across our Pillars of Optimization to maximize cost savings and cloud environment efficiency.
  • Spot Instance & Container Optimization: Save up to 90% compared to traditional cloud infrastructure by running both Instances/VMs and Containers on spot resources for relevant workloads.
  • Multi-Cloud Optimization: Cloud optimization on a single public cloud is one challenge but optimizing a hybrid cloud is a whole other challenge. Apply learning from your assessment to optimize your cloud environment for AWS, Microsoft Azure, Google Cloud, and VMware on AWS.
  • Forecasting, Modeling, & Analytics: Understand your past usage, and model and forecast your future needs with the analytical data needed for visibility across your organization.

Our cloud optimization process starts with data, and you have a lot of it. But data alone can lead you astray, yielding wasted resources and overspend. There are many other factors to evaluate, such as EDP/EA agreements and Savings Plans/RI purchases, to ensure you choose the most cost-effective option for your business. Strategically, our data scientists and architects map connections between data and workloads. We then make correlations between how workloads interact with each resource and the optimal financial mechanism to reach your cloud optimization goals.

Cloud Optimization with 2nd Watch

Working with a managed cloud service provider like 2nd Watch will give your organization the expertise needed for cloud optimization. If you want to learn more about cost savings or are interested in fully optimizing your cloud infrastructure, contact us to take your next steps.

 


Optimizing your environment using AWS Savings Plans

Surprisingly, AWS has very quietly released a major enhancement/overhaul to purchasing compute resources up front. To date, purchasing Reserved Instances (Standard or Convertible) has offered AWS users great savings for their static workloads. This works because static workloads tend to utilize a set number of resources and RIs are paid for in advance, thereby justifying the financial commitment.

That said, how often do today’s business needs remain constant, particularly with today’s product development? So, until now, you had two choices if you couldn’t use your RIs: take the loss and let the RI term run out or undertake the hassle of selling it on the marketplace (potentially for a loss). AWS Savings Plans, on the other hand, provide a gigantic leap forward in solving this problem. In fact, you will find that these AWS Savings Plans provide far more flexibility and return for your investment than the standard RI model.

Here is the gist of the AWS Savings Plans program, taken from the AWS site:

AWS Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instances usage, regardless of instance family, size, OS, tenancy or AWS Region, and also applies to AWS Fargate usage.

AWS Savings Plans offer significant savings over On Demand, just like EC2 Reserved Instances, in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. You can sign up for Savings Plans for a 1- or 3-year term and easily manage your plans by taking advantage of recommendations, performance reporting and budget alerts in the AWS Cost Explorer. (Jeff Barr, AWS, 11.06.2019)

This is HUGE for AWS clients, because now, for the first time ever, savings can also be applied to workloads that leverage serverless containers—as well as traditional EC2 instances!

Currently there are two AWS Savings Plans, and here’s how they compare:

EC2 Instance Savings Plan:

  • Offers discount levels up to 72% off on-demand rates (same as RIs).
  • Any changes in instances are restricted to the same AWS region.
  • Restricts EC2 instance types to the same family, but allows changes in instance size and OS (e.g., t3.medium to t3.2xlarge).
  • EC2 instances only. Similar to Convertible RIs, this plan allows you to increase instance size, with a new twist: you can also reduce instance size! Yes, this means you may no longer have to sell your unused RIs on the marketplace!
  • Bottom line: Slightly less flexible, but you garner a greater discount.

Compute Savings Plan:

  • Offers discount levels up to 66% off on-demand rates (the same rate as Convertible RIs).
  • Spans regions. This could be a huge draw for companies with a need for regional or national coverage.
  • More flexible. Does not limit EC2 instance families or OS, so you are no longer locked into a specific instance family at the moment of purchase, as you would be with a traditional RI.
  • Allows clients to mix and match AWS products, such as EC2 and Fargate; extremely beneficial for clients who use a range of environments for their workloads.
  • Bottom line: More flexible, but with less of a discount.

As with standard RI purchases, understanding your workloads will be key to determining when to use AWS Savings Plans vs. standard RIs (RIs aren’t going anywhere, but we recommend that Savings Plans be used in place of RIs moving forward) vs. On-Demand (including analysis of potential savings from auto-parking, seasonality, elasticity, and so on).
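The commitment model described above can be sanity-checked with back-of-the-envelope arithmetic: you pay the committed $/hour regardless of usage, the commitment absorbs usage at a discounted rate, and anything beyond it bills at on-demand rates. The 34% discount rate and figures below are illustrative assumptions, not published AWS prices:

```python
# Rough sketch of Savings Plans billing mechanics (illustrative rates only).

def monthly_bill(on_demand_usage_per_hr, commitment_per_hr, discount=0.34, hours=730):
    """Estimated monthly cost under a $/hour Savings Plan commitment."""
    # Each committed dollar absorbs 1 / (1 - discount) dollars of
    # on-demand-rated usage, because covered usage bills at the discount.
    covered_on_demand_equiv = commitment_per_hr / (1 - discount)
    # Usage beyond what the commitment covers spills over to on-demand rates.
    overflow = max(on_demand_usage_per_hr - covered_on_demand_equiv, 0.0)
    # You pay the full commitment even if usage falls short of it.
    return (commitment_per_hr + overflow) * hours

print(round(monthly_bill(10.0, commitment_per_hr=0.0)))  # all on-demand
print(round(monthly_bill(10.0, commitment_per_hr=6.6)))  # fully committed
```

Note the flip side this exposes: commit more $/hour than you actually use and you pay for capacity you never consume, which is exactly why the usage analysis has to come before the purchase.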

Sound a bit overwhelming? Fear not! This is where 2nd Watch’s Cloud Optimization service excels! Enrollment starts with a full analysis of your organization’s usage, AWS environment, and any other requirements/restrictions your organization may have. The final result is a detailed report, expertly determined by our AWS-certified optimization engineers, with our savings findings and recommendations—customized just for you!

Because AWS Savings Plans cannot, unfortunately, replace existing RI purchases, they will bring the most immediate value to clients who are either new to AWS or have no RI commitments currently on their account. Whatever your goals, our optimization experts are ready to help you plan the most strategically efficient and cost-effective next step of your cloud transformation.

And that’s just the beginning

If you think that AWS Savings Plans may benefit your new or existing AWS deployment, contact us to jumpstart an analysis.

-Jeff Collins, Cloud Optimization Product Management


Well-Architected Framework Reviews

“Whatever you do in life, surround yourself with smart people who argue with you.” – John Wooden

Many AWS customers and practitioners have leveraged the Well-Architected Framework methodology when building new applications or migrating existing ones. But once a build or migration is complete, how many companies implement Well-Architected Framework reviews and perform them regularly? We have found that many companies today do not conduct regular Well-Architected Framework reviews and, as a result, potentially face a multitude of risks.

What is the Well-Architected Framework?

The Well-Architected Framework is a methodology designed to provide high-level guidance on best practices when using AWS products and services. Whether you are building new workloads or migrating existing ones, security, reliability, performance, cost optimization, and operational excellence are vital to the integrity of the workload and can even be critical to the success of the company. Regular reviews of your architecture are especially critical given the rate at which Cloud Service Providers (CSPs) innovate and release new products and services.

2nd Watch Well-Architected Framework Reviews

At 2nd Watch, we provide Well-Architected Framework reviews for our existing and prospective clients. The review process allows customers to make informed decisions about their architecture, the potential impact those decisions have on their business, and the tradeoffs they are making. 2nd Watch offers its clients free Well-Architected Framework reviews, conducted on a regular basis, for mission-critical workloads that could have a negative business impact upon failure.

Examples of issues we have uncovered and remediated through Well-Architected Reviews:

  • Security: Not protecting data in transit and at rest through encryption
  • Cost: Low utilization and inability to map cost to business units
  • Reliability: Single points of failure where recovery processes have not been tested
  • Performance: A lack of benchmarking or proactive selection of services and sizing
  • Operations: Not tracking changes to configuration management on your workload

Using a standards-based methodology, 2nd Watch will work closely with your team to thoroughly review the workload and will produce a detailed report outlining actionable items and timeframes, along with prescriptive guidance in each of the key architectural pillars.

In reviewing your workload and architecture, 2nd Watch will identify areas of improvement, along with a detailed report of our findings. A separate paid engagement will be available to clients and prospects who want our AWS Certified Solutions Architects and AWS Certified DevOps Engineer Professionals to remediate our findings. To schedule your free Well-Architected Framework review, contact 2nd Watch today.

 

— Chris Resch, EVP Cloud Solutions, 2nd Watch


Budgets: The Simple Way to Encourage Cloud Cost Accountability

Controlling costs is one of the greatest challenges facing IT and Finance managers today.  The cloud, by nature, makes it easy to spin up new environments and resources that can cost thousands of dollars each month. And while there are many ways to help control costs, one of the simplest and most effective is to set and manage cloud spend-to-budget. Most enterprise budgets are set at the business unit or department level, but for cloud spend, mapping that budget down to the workload can establish strong accountability within the organization.

One popular method workload owners use to manage spend is tracking month-over-month cost variances.  However, if costs do not drastically increase from one month to the next, this method does very little to control spend; often it is only when a department faces budget issues that workload owners work diligently to reduce costs.  By contrast, when budgets are set for each workload, owners become more aware of how their cloud spend impacts company financials and tend to manage their costs more carefully.

In this post, we provide four easy steps to help you manage workload spend-to-budget effectively.

Step 1: Group Your Cloud Resources by Workload and Environment

Use a financial management tool such as 2nd Watch CMP Finance Manager to group your cloud resources by workload and its environment (Test, Dev, Prod).  This can easily be accomplished by creating a standard where each workload/environment has its own cloud account, or by using tags to identify the resources associated with each workload. If using tags, use a tag for the workload name such as workload_name: and a tag for the environment such as environment:. More tagging best practices can be found here.
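To illustrate what a finance-management tool does with those tags, here is a minimal Python sketch that groups a made-up resource inventory by the workload_name and environment tags described above. The inventory and IDs are invented for the example.

```python
# Group a (hypothetical) resource inventory by workload and environment tags.
from collections import defaultdict

resources = [
    {"id": "i-0a1", "tags": {"workload_name": "billing", "environment": "prod"}},
    {"id": "i-0b2", "tags": {"workload_name": "billing", "environment": "dev"}},
    {"id": "i-0c3", "tags": {"workload_name": "portal",  "environment": "prod"}},
    {"id": "i-0d4", "tags": {}},  # untagged resource, flagged for cleanup
]

groups = defaultdict(list)
untagged = []
for r in resources:
    workload = r["tags"].get("workload_name")
    env = r["tags"].get("environment")
    if workload and env:
        groups[(workload, env)].append(r["id"])
    else:
        untagged.append(r["id"])   # costs here can't be allocated to a workload

print(dict(groups))
print("untagged:", untagged)
```

Anything that lands in the untagged bucket is cost you cannot allocate, which is why tightly controlling the tagging standard matters.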

Step 2: Group Your Workloads and Environments by Business Group

Once your resources are grouped by workload/environment, CMP Finance Manager will allow you to organize your workload/environments into business groups. For example:

a. Business Group 1
   i. Workload A
      1. Workload A Dev
      2. Workload A Test
      3. Workload A Prod
   ii. Workload B
      1. Workload B Dev
      2. Workload B Test
      3. Workload B Prod
b. Business Group 2
   i. Workload C
      1. Workload C Dev
      2. Workload C Test
      3. Workload C Prod
   ii. Workload D
      1. Workload D Dev
      2. Workload D Test
      3. Workload D Prod

Step 3: Set Budgets

At this point, you are ready to set budgets for each of your workloads (each workload/environment as well as the total workload, as they may have different owners). We suggest you set annual budgets aligned to your fiscal year and have your tool programmatically recalculate each month's budget from the amount remaining in the annual budget.
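The monthly recalculation is simple arithmetic. A tool-agnostic sketch, with hypothetical figures:

```python
def remaining_monthly_budget(annual_budget, spend_by_month, months_elapsed):
    """Spread what is left of the annual budget evenly across the
    remaining months of the fiscal year."""
    spent = sum(spend_by_month[:months_elapsed])
    months_left = 12 - months_elapsed
    if months_left == 0:
        return 0.0
    return max(annual_budget - spent, 0.0) / months_left

# Hypothetical: $120k annual budget, slightly heavy spend in the first quarter,
# so the remaining nine months each get a tightened budget.
spend = [11000, 12500, 10800]
print(f"${remaining_monthly_budget(120000, spend, 3):,.2f} per remaining month")
```

Overspending early automatically tightens the budget for the rest of the year, which is exactly the feedback loop that drives owner accountability.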

Step 4: Create Alerts

The final step is to create alerts to notify owners and yourself when workloads either have exceeded or are on track to exceed the current month or annual budget amount.  Here are some budget notifications we recommend:

  1. Month-end (ME) forecast exceeds the month budget
  2. Month-to-date (MTD) spend exceeds the MTD budget
  3. MTD spend exceeds the month budget
  4. Daily spend exceeds the daily budget
  5. Year-end (YE) forecast exceeds the year budget
  6. Year-to-date (YTD) spend exceeds the year budget
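Each of these notifications reduces to a simple threshold check. A simplified sketch with illustrative figures (in practice, a tool evaluates these against live billing data for you):

```python
# Evaluate the six budget-alert conditions from simple spend figures.
def budget_alerts(mtd_spend, mtd_budget, month_budget, me_forecast,
                  daily_spend, daily_budget, ytd_spend, year_budget, ye_forecast):
    alerts = []
    if me_forecast > month_budget:
        alerts.append("ME forecast exceeds month budget")
    if mtd_spend > mtd_budget:
        alerts.append("MTD spend exceeds MTD budget")
    if mtd_spend > month_budget:
        alerts.append("MTD spend exceeds month budget")
    if daily_spend > daily_budget:
        alerts.append("Daily spend exceeds daily budget")
    if ye_forecast > year_budget:
        alerts.append("YE forecast exceeds year budget")
    if ytd_spend > year_budget:
        alerts.append("YTD spend exceeds year budget")
    return alerts

# Hypothetical workload: pacing hot this month, but fine for the year.
print(budget_alerts(mtd_spend=6200, mtd_budget=5000, month_budget=10000,
                    me_forecast=11500, daily_spend=300, daily_budget=350,
                    ytd_spend=48000, year_budget=120000, ye_forecast=118000))
```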

Once alerts are set, owners can make timely decisions regarding spend.  The owner can now proactively shift to Spot Instances, purchase Reserved Instances, change instance sizes, park the environment when not in use, or even refactor the application to take advantage of cloud-native services like AWS Lambda.

Our experience has shown that enterprises that diligently set up and manage spend-to-budget by workload have more control of their costs and ultimately, spend less on their cloud environments without sacrificing user experience.

 

–Timothy Hill, Senior Product Manager, 2nd Watch


Cloud Cost Optimization with AWS

AWS regularly cuts customer costs by reducing the prices of its services.  This happened most recently with the price reduction of C4, M4 and R3 instances, which saw a 5% price cut when running Linux.  It was AWS's 51st price reduction.  Customers are clearly benefiting from the scale AWS brings to the market.  Spot Instances and Reserved Instances are other ways customers can significantly reduce the cost of running their workloads in the cloud.

Sometimes these cost savings are not as obvious, but they need to be understood and measured when doing a TCO calculation.  AWS recently announced Certificate Manager.  Certificate Manager allows you to request new SSL/TLS certificates and then manage them with automated renewals.  The best part is that the service is free!  Many vendors charge hundreds of dollars for new certificates, and AWS is now offering it for free.  The automated renewal could also save you time and money while preventing costly outages.  Just ask the folks over at Microsoft how costly a certificate expiring can be.

Another way AWS reduces the cost of managing workloads is by offering new features within an existing service.  S3 Standard – Infrequent Access is an example of this.  AWS offers the same eleven 9s of durability while reducing availability from four 9s to three.  Customers who are comfortable going from roughly 52 minutes of downtime a year to roughly 8.8 hours per year for objects that don't need the same level of availability can save well over 50%, even at the highest usage levels.  When you add features like encryption, versioning, cross-region replication and others, you start to see the true value.  Building and configuring these features yourself in a private cloud or in your own infrastructure can be a costly add-on.  AWS often offers these add-ons for free or charges only for the associated use, like the storage cost for cross-region replication.
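Those downtime figures follow directly from the availability percentages. A quick check:

```python
# Yearly downtime implied by an availability SLA.
HOURS_PER_YEAR = 365 * 24  # 8,760

for label, availability in (("four 9s (99.99%)", 0.9999),
                            ("three 9s (99.9%)", 0.999)):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{label}: ~{downtime_hours * 60:.1f} min/yr (~{downtime_hours:.2f} hrs)")
```

Four nines works out to about 52.6 minutes per year, and three nines to about 8.76 hours per year.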

Look beyond CPUs, memory, and bytes on disk when calculating the savings you will get with a move to AWS.  Explore the features and services you cannot offer your business from within your own datacenter or colocation facility.  Find a partner like 2nd Watch to help you manage and optimize your cloud infrastructure for long-term savings.

-Chris Nolan, Director of Product


Accurate Enterprise Cloud Cost Tracking & Allocation

AWS enables enterprises to trade capital expense for variable expense, lower operating costs and increase speed and agility. As enterprises begin to deploy cloud services across their business, it is critical to have a standardized approach to allocate usage costs to the appropriate department or cost center. By tracking costs at the cost center level, enterprises gain visibility throughout their organization, and specifically into who is spending precious IT funds.

To allocate costs, usage must first be grouped.  AWS provides two methods to group usage: resource tags and AWS accounts. Each method is useful but also comes with downsides.

Using AWS Tagging to group usage

  • Grouping by tag enables enterprises to run all of their workloads (applications) in a single AWS account, simplifying management within the AWS console.
  • A tagging schema needs to be created, universally deployed and tightly controlled.
  • Care has to be taken to ensure all individual AWS resources are tagged properly as any mistake in tagging will cause a resource to be left out of the group and not reported properly.
  • Many AWS resources are untaggable, which requires creating and maintaining a separate cost-distribution scheme to allocate those costs across the enterprise.
  • Reserved Instance (RI) discounted usage pricing cannot be linked to a single tag group and can result in significant costing inaccuracies.
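The tagging mistakes noted above are straightforward to catch with automation. A minimal audit sketch, assuming a hypothetical three-key schema (the required keys and the inventory are illustrative, not a 2nd Watch standard):

```python
# Minimal tag audit: flag resources missing any required allocation keys.
REQUIRED_KEYS = {"workload_name", "environment", "cost_center"}  # example schema

def audit(resources):
    """Return {resource_id: missing_keys} for every non-compliant resource."""
    findings = {}
    for res in resources:
        missing = REQUIRED_KEYS - set(res.get("tags", {}))
        if missing:
            findings[res["id"]] = sorted(missing)
    return findings

inventory = [
    {"id": "vol-111", "tags": {"workload_name": "erp", "environment": "prod",
                               "cost_center": "cc-204"}},
    {"id": "vol-222", "tags": {"workload_name": "erp"}},  # incomplete tagging
]
print(audit(inventory))
```

Run regularly, a check like this keeps tag drift from silently eroding the accuracy of your cost allocation.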

Using Multiple AWS Accounts to group usage

  • Using individual AWS accounts for each workload provides the most accurate and detailed reporting of costs and usage.
  • By creating a separate AWS account for each workload, enterprises can track all associated costs (including RIs) and allocate them to cost centers, departments and/or business units.
  • When using AWS accounts to group usage, each account must be manually set up.
  • There is no method of sharing resources, such as databases, with multiple workloads as each workload is located in separate AWS accounts.

Given the challenges of both account-based and tag-based grouping, we have found that the tracking methodology needs to be aligned to the applications or workloads.  For deployments where the resources are 100% dedicated to a specific workload, grouping by AWS accounts is ideal, as it is the only way to ensure fully accurate costing. Tag-based grouping should be used when you need to share resources across multiple workloads; however, enterprises must note that costing will not be 100% accurate when using tag groups.

Tracking and Allocating Costs for Workloads with Dedicated Resources

As stated above, workloads that do not need to share resources should be set up in unique AWS accounts.  This can be accomplished by establishing individual AWS accounts for each workload and mapping them directly to your enterprise organizational structure. The example below illustrates how a complex enterprise can organize its cloud expenses and provide showback or chargeback reports across the enterprise.

 

In this example, the enterprise would receive two bills for its cloud usage – Business Unit 1 and Business Unit 2.  Under each business unit there are multiple levels of cost centers that roll up to each subsequent higher level, which is typical of many enterprise organizations.  AWS accounts are created for each project/workload and then rolled up to provide consolidated usage information by department and business unit. This type of structure enables:

  • The owners at the “resources and workload cost accrual and tracking” levels to track their individual usage by AWS accounts, which captures 100% of the cost associated with each AWS account
  • The management of department level to view the consolidated usage for their respective cost centers and workloads
  • The management of each business unit to view usage by department and AWS account and receive a bill for its entire consolidated usage

This provides a reliable and accurate methodology to track and allocate cloud usage based on your distinct enterprise organizational structure. It does, however, require a disciplined approach to creating new projects and updating your expense management platform to provide executive-level dashboards and the ability to drill down to detailed consumption reports by cost center.  This enables enterprise IT to provide executive-level transparency while keeping excessive resource consumption under control and reducing IT costs.

Tracking and Allocating Costs for Workloads with Shared Resources

In many organizations there is a need to share key resources, such as databases, across multiple workloads. In these cases it is a best practice to use AWS tags to group your expenses. This method requires careful set up of resources and the creation of a schema to allocate shared resources and resources that cannot be tagged across the enterprise.

Tagging allows enterprises to assign their own metadata to each taggable resource. Tags have no semantic meaning to AWS resources and are interpreted strictly as strings of characters. Each tag is made up of a "Key" and a "Value". AWS allows up to 10 keys per resource, and each key can take any number of values, enabling very detailed grouping of resources.  Tagging should be set up based on the needs of the organization and the AWS architecture design. The image below illustrates how to establish a tagging scheme for a 2-Tier Auto-scalable Web Application.

As the project moves from Web Sandbox to Web Staging to Web Production, you can use tags to track usage.  When the application is in the Sandbox, all resources are tagged with the key "Web Sandbox" and the appropriate values (Environment, Owner, App and/or IT Tower). When the project moves to Web Staging, you simply replace the original key and values with the ones associated with the next step in development.

While there is no one-size-fits-all solution to AWS expense management, deploying one or both of these methods can provide you the visibility necessary to successfully meet the tracking and analytical needs of your enterprise.

-Tim Hill, Product Manager


Planning for Cost Management with Amazon Web Services

As firms progress through the transition from traditional IT to the AWS Cloud, there is often a moment of fear and anxiety related to managing cost. The integration and planning group has done an excellent job of selecting the right consulting partner. Contracts have been negotiated by legal and procurement. Capital funding has been allocated to cover the cost of application migrations. Designs are accepted and the project manager has laid out the critical path to success. Then at the last hour, just before production launch, the finance team chimes in – “How are we going to account for each application’s monthly usage?”

So much planning and preparation is put into integration, because we’ve gone through this process with each new application. But moving to the public cloud presents a new challenge, one that’s easily tackled with a well-developed model for managing cost in a usage-based environment.

AWS allows us to deploy IaaS (Infrastructure as a Service), and that infrastructure is shared across all of our applications in the cloud. With the proper implementation of AWS Resource Tags, cloud resources can be associated with unique applications, departments, environments, locations and any other category for which cost-reporting is essential.

Firms must have the right dialog in the design process with their cloud technology partner. Here’s an outline of the five phases of the 2nd Watch AWS Tagging Methodology, which has been used to help companies plan for cloud-based financial management:

Phase 1: Ask Critical Questions – Begin by asking Critical Questions that you want answered about utilization, spending and resource management. Consider ongoing projects, production applications, and development environments. Critical Questions can include: Which AWS resources are affecting my overall monthly bill? What is the running cost of my high availability features like warm standby, pilot light or geo-redundancy? How do architectural changes or application upgrades change my monthly usage?

Phase 2: Develop a Tagging Strategy – The Cloud Architect will interpret these questions and develop a tagging strategy to meet your needs. The strategy then becomes a component of the Detailed Design and is later implemented during the build project. During this stage it's important to consider the enforcement of standards within the organization: configuration drift occurs when other groups stop using the standardized AWS Resource Tags defined in your naming convention. Later, when it's time for reporting, this creates problems for accounting and finance.

Phase 3: Determine Which AWS Resources Are In Scope – Solicit feedback from your internal accounting department and application owners. Create a list of AWS Resources and applications that need to be accounted for. Refer frequently to AWS online documentation because the list of taggable resource types is updated often.

Phase 4: Define How Chargebacks and Showbacks Will Be Used – Determine who will receive usage-based reports for each application, environment or location. Some firms have adopted a Chargeback model in which the accounting team bills the internal groups who have contributed to the month’s AWS usage. Others have used these reports for Showback only, where the usage & cost data is used for planning, forecasting and event correlation. 2W Insight offers a robust reporting engine to allow 2nd Watch customers the ability to create, schedule and send reports to stakeholders.

Phase 5: Make Regular Adjustments For Optimization – Talk to your Cloud Architect about automation to eliminate configuration drift. Incorporate AWS tagging standards into your CloudFormation templates. Regularly review tag keys and values to identify non-standard use cases. And solicit the feedback of your accounting team to ensure the reports are useful and accurate.
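The tag-value review in Phase 5 can also be automated. A small sketch, assuming a hypothetical approved-values list for the environment key (your naming convention would define the real one):

```python
# Drift check: flag tag *values* that fall outside the approved naming convention.
ALLOWED = {"environment": {"dev", "test", "prod"}}  # hypothetical standard

def find_drift(resources):
    """Return (resource_id, key, offending_value) for each non-standard value."""
    findings = []
    for res in resources:
        for key, allowed_values in ALLOWED.items():
            value = res.get("tags", {}).get(key)
            if value is not None and value not in allowed_values:
                findings.append((res["id"], key, value))
    return findings

# 'Production' drifts from the standardized lowercase 'prod'.
print(find_drift([{"id": "i-9", "tags": {"environment": "Production"}},
                  {"id": "i-1", "tags": {"environment": "prod"}}]))
```

Catching case and spelling drift early keeps chargeback and showback reports trustworthy.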

Working with an AWS Premier Consulting Partner is critical to designing for best practices like cost management. Challenge your partner and ask for real-world examples of AWS Resource Tagging strategies and cost reports. Planning to manage costs in the cloud is not a step that should be skipped. It’s critical to incorporate your financial reporting objectives into the technical design early, so that they can become established, standardized processes for each new application in the cloud.

For more information, please reach out to Zachary Bonugli zbonugli@2ndwatch.com.

– Zachary Bonugli, Global Account Manager


Amazon Updates Reserved Instances Model

In an effort to simplify the Reserved Instances (RI) model, AWS announced yesterday a change in the model based on customer feedback and purchasing patterns.

AWS will move from three RI types – Heavy, Medium and Light Utilization – to a single type with three payment options. All continue to provide capacity assurance and discounts compared to On-Demand prices.

The three new payment options give you flexibility to pay for the entire RI upfront, a portion of the RI upfront and a portion over the term, or nothing upfront and the entire RI over the course of the term.

What does this mean for you? These changes will really benefit predictable workloads that are running >30% of the time.  In cases where usage is less consistent, it may be better for companies to stick with on-demand rates.  We’ve developed some related research on usage trends. Meanwhile, our role as a top AWS partner continues to be simplifying procurement of all AWS products and services.
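To see how the three payment options compare, here is an illustrative Python calculation. The prices are hypothetical, not actual AWS rates; the general pattern is that the more you pay upfront, the larger the effective discount.

```python
# Comparing the three RI payment options over a 1-year term (hypothetical prices).
HOURS_PER_YEAR = 8760

options = {
    "All Upfront":     {"upfront": 480.0, "hourly": 0.000},
    "Partial Upfront": {"upfront": 250.0, "hourly": 0.030},
    "No Upfront":      {"upfront": 0.0,   "hourly": 0.062},
}
on_demand_hourly = 0.10  # hypothetical On-Demand rate for the same instance

for name, o in options.items():
    total = o["upfront"] + o["hourly"] * HOURS_PER_YEAR
    savings = 1 - total / (on_demand_hourly * HOURS_PER_YEAR)
    print(f"{name}: ${total:,.0f}/yr ({savings:.0%} vs. On-Demand)")
```

A committed instance is billed for the full term regardless of use, which is why the break-even point depends on how consistently the workload runs.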

Download the AWS Usage infographic

Read more about the new RI model.