Cloud Economics: Empowering Organizations for Success

Cloud economics is crucial for organizations that want to make the most of their cloud solutions, and business leaders need to prioritize shifting their company culture to embrace accountability and trackability.

When leaders hear the phrase “cloud economics,” they think about budgeting and controlling costs. Cost management is an element of cloud economics, but it is not the entire equation. For cloud economics to be implemented in a beneficial way, organizations must recognize that it is not merely a budgetary practice but an organizational culture shift.

The very definition of “economics” indicates that the study is more than just a numbers game. Economics is “a science concerned with the process or system by which goods and services are produced, sold, and bought.” The practice of economics involves a whole “process or system” where actors and actions are considered and accounted for. 

With this definition in mind, cloud economics means that companies are required to look at key players and behaviors when evaluating their cloud environment in order to maximize the business value of their cloud. 

Once an organization has fully embraced the study of cloud economics, it will be able to gain insight into which departments are utilizing the cloud, what applications and workloads are utilizing the cloud, and how all of these moving parts contribute to greater business goals. Embodying transparency and trackability enables teams to work together in a harmonious way to control their cloud infrastructure and prove the true business benefits of the cloud. 

If business leaders want to apply cloud economics to their organizations, they must go beyond calculating cloud costs. They will need to promote a culture of cross-functional collaboration and honest accountability. Leadership should prioritize and facilitate the joint efforts of cloud architects, cloud operations, developers, and the sourcing team. 

Cloud economics will encourage communication, collaboration, and change in culture, which will have the added benefit of cloud cost management and cloud business success. 

Where do companies lose control of their cloud costs?

When companies lose control of cloud costs, the business value of the cloud disappears as well. If cloud spending is running high and there is no business value to show for it, how are leaders supposed to feel good about their cloud infrastructure? Going over budget with no benefits would not be a sound business case for any enterprise in any industry.

It is quite easy for cloud spending to spiral out of control, and it usually boils down to poor business decisions that come from leadership. Company leaders should first recognize that they wield the power to manage cloud costs and foster communication between teams. If they are making poor business decisions, like prioritizing speedy delivery over well-written code or not promoting transparency, then they are allowing practices that negatively impact cloud costs.

When leaders push their teams to be fast rather than thorough, it creates technical debt and tension between teams. The following sub-optimal practices can happen when leadership is not prioritizing cloud cost optimizations:

  • Developers ignore seemingly small administrative tasks that are actually immensely important and consequential, like rightsizing infrastructure or turning off inactive applications. 
  • Architects select suboptimal designs that are easier and faster to implement but are more expensive to run.
  • Developers use inefficient code and crude algorithms in order to ship a feature faster, but then fail to revisit performance optimizations that would reduce resource consumption.
  • Developers forgo deployment automation that would help to automatically rightsize.
  • Developers build code that isn’t inherently cloud-native, and therefore not cloud-optimized.
  • Finance and procurement teams look only at the bottom line and don’t fully understand why the cloud bill is so high, which creates tension between IT/dev and finance/procurement. 

When these actions compound, they lead to an infrastructure mess that is incredibly difficult to clean up. Poorly implemented designs that are not easily scalable require a significant amount of development time to fix, leaving companies with inefficient cloud infrastructure and preposterously high cloud costs.

Furthermore, these high and unexplained cloud bills cause rifts between teams and are detrimental to collaboration efforts. Lack of accountability and visibility causes developer and finance teams to have misaligned business objectives. 

Poor cloud governance and culture are derived from leadership’s misguided business decisions and muddled planning. If leaders don’t prioritize cloud cost optimization through cloud economics, the business value of the cloud is diminished, and company collaboration will suffer. Developers and architects will continue to execute processes that create high cloud costs, and finance and procurement teams will forever be at odds with the IT team.

What are the benefits of cloud economics?

Below are the key benefits of cloud economics, along with a few common business pitfalls that leaders can easily address if they embrace the practice:

  1. Cost Savings: The cloud eliminates the need for upfront hardware investments and reduces ongoing maintenance and operational costs. Organizations only pay for the resources they use, allowing for cost optimization and scalability.
  2. Infrastructure Efficiency: Cloud providers can achieve economies of scale by consolidating resources and optimizing data center operations. This results in higher infrastructure efficiency, reducing costs for businesses compared to managing their own on-premises infrastructure.
  3. Agility and Speed: The cloud enables rapid deployment and provisioning of resources, reducing the time and cost associated with traditional IT infrastructure setup. This agility allows businesses to quickly adapt to changing market demands and launch new products or services faster.
  4. Global Reach and Accessibility: Cloud services provide a global infrastructure footprint, allowing businesses to easily expand their operations into new regions without the need for physical infrastructure investments. This global reach enables faster access to customers and markets.
  5. Scalability and Elasticity: Cloud services offer the ability to scale resources up or down based on demand. This scalability eliminates the need for overprovisioning and ensures businesses have the necessary resources to handle peak workloads without incurring additional costs during idle periods.
  6. Improved Resource Utilization: Cloud providers optimize resource utilization through virtualization and efficient resource management techniques. This leads to higher resource utilization rates, reducing wasted capacity and maximizing cost efficiency.
  7. Business Continuity and Disaster Recovery: Cloud services provide built-in redundancy and disaster recovery capabilities, reducing the need for costly backup infrastructure and complex recovery plans. This improves business continuity while minimizing the financial impact of potential disruptions.
  8. Innovation and Competitive Edge: The cloud enables rapid experimentation and innovation, allowing businesses to quickly test and launch new products or services. This agility gives organizations a competitive edge in the market, driving revenue growth and differentiation.
  9. Focus on Core Business: By offloading infrastructure management to cloud providers, businesses can focus more on their core competencies and strategic initiatives. This shift in focus improves productivity and resource allocation, leading to better economic outcomes.
  10. Decentralized Costs and Budgets: Knowing budgets may seem obvious, but more often than not, leaders don’t even know what they are spending on the cloud. This is usually due to siloed department budgets and a lack of disclosure. Cloud economics requires leaders to create visibility into their cloud spend and open channels of communication about allocation, budgeting, and forecasting.
  11. Lack of Planning and Unanticipated Usage: If organizations don’t plan, then they will end up over-utilizing the cloud. Failing to forecast or proactively budget cloud resources will lead to using too many unnecessary and/or unused resources. With cloud economics, leaders are responsible for strategies, systems, and internal communications to connect cloud costs with business goals.
  12. Non-Committal Mindset: This issue is a culmination of other problems. If business leaders are unsure of what they are doing in the cloud, they are less willing to commit to long-term cloud contracts. Unwillingness to commit to contracts is a missed opportunity for business leaders because long-term engagements are more cost-friendly. Once leaders have implemented cloud economics to inspire confidence in their cloud infrastructure, they can assertively evaluate purchasing options in the most cost-effective way.

What are the steps to creating a culture around cloud economics?

Cloud economics is a study that goes beyond calculating and cutting costs. It is a company culture built through cross-functional effort. Though it seems like a significant undertaking, the steps to get started are quite manageable. Below is a high-level plan that business leaders must take charge of to create a culture around prioritizing cloud economics:

1. Inform: Stage one consists of lots of data collecting and understanding of the current cloud situation. Company leaders will need to know what the true costs of the cloud are before they can proceed forward (a short sketch of pulling this data follows these three steps). Creating visibility around the current state is also the first step to creating a culture of communication and transparency amongst teams and stakeholders.

2. Optimize: Once the baseline is understood, leadership can analyze the data in order to optimize cloud costs. The visibility of the current state is crucial for teams and leadership to understand what they are working with and how they can optimize it. This stage is where a lot of conversations happen amongst teams to come up with an optimization action plan. It requires teams and stakeholders to communicate and work together, which ultimately builds trust among each other.

3. Operate: Finally, the data analysis and learnings can be implemented. With the optimization action plan, leaders should know what areas of the cloud demand optimization first and how to optimize these areas. At this point in the process, teams and stakeholders are comfortable with cross-collaboration and honest communications amongst each other. This opens up a transparent feedback loop that is necessary for continuous improvement. 
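As an illustration of the “Inform” stage, the sketch below uses the AWS Cost Explorer API via boto3 to break one month of spend down by service. The date range, the choice of unblended cost, and the grouping key are assumptions for illustration; adapt them to your own accounts, budgets, and tagging scheme.

```python
# A minimal sketch of the "Inform" stage: one month of spend grouped by service.
# Assumes boto3 credentials with ce:GetCostAndUsage permission; dates are placeholders.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-05-01", "End": "2023-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if amount > 0:
            print(f"{service:<45} ${amount:,.2f}")
```

Swapping the GroupBy key for a cost allocation tag (for example, a department or application tag) turns the same report into the per-team visibility that the “Optimize” and “Operate” stages depend on.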

Conclusion

The entire organization stands to gain when cloud economics is prioritized. A cost-efficient cloud infrastructure will lead to improved productivity, cross-functional collaboration between teams, and focused efforts towards greater business objectives. 

Ready to take control of your cloud costs and maximize the value of your cloud infrastructure? Contact 2nd Watch today and let our team of experts help you implement cloud economics within your organization. As a trusted partner for enterprise-level services and support, we have the expertise to assist you in planning, analyzing, and recommending strategies to optimize your cloud costs and drive business objectives. Don’t let cloud spending go unchecked. Take charge of your cloud economics by reaching out to a 2nd Watch cloud expert now.

Mary Fellows | Director of Cloud Economics at 2nd Watch


Cloud Autonomics and Automated Management and Optimization: Update

The holy grail of IT operations is to achieve a state where all mundane, repeatable remediations occur without intervention, with a human only being woken for actions that simply cannot be automated. This not only allows for many restful nights, it also allows IT operations teams to become more agile while maintaining a proactive and highly optimized enterprise cloud. Getting to that state seems like something found only in the greatest online fantasy game, but the growing popularity of “AIOps” gives great hope that it may actually be closer to reality than once thought.

Skeptics will tell you that automation, autonomics, orchestration, and optimization have been alive and well in the datacenter for more than a decade now. Microsoft System Center, IBM Tivoli, and ServiceNow are just a few examples of autonomic platforms that can collect, analyze, and act on sensor data derived from physical and virtual infrastructure and appliances. But when you couple these capabilities with the advancements brought by AIOps, you are able to take advantage of the previously missing components: big data analytics along with artificial intelligence (AI) and machine learning (ML).

As you can imagine, these advancements have brought an explosion of new tooling and services from cloud ISVs that aim to make the once-utopian autonomic cloud a reality. Palo Alto Networks’ Prisma Public Cloud product is a great example of a technology that functions with autonomic capabilities. The security and compliance features of Prisma Public Cloud are impressive, but it also has a component known as User and Entity Behavior Analytics (UEBA). UEBA analyzes user activity data from logs, network traffic, and endpoints and correlates this data with security threat intelligence to identify activities, or behaviors, likely to indicate a malicious presence in your environment. After analyzing the current vulnerability and risk landscape, it reports on that state and derives a set of guided remediations that can either be performed manually against the infrastructure in question or automated, providing a proactive, hands-off response so that vulnerabilities and security compliance are always kept in check.

Another ISV focused on AIOps is Moogsoft, which is bringing a next-generation platform for IT incident management to the cloud. Moogsoft has purpose-built machine learning algorithms designed to better correlate alerts and reduce much of the noise associated with all the data points. When you marry this with its artificial intelligence capabilities for IT operations, Moogsoft is helping DevOps teams operate smarter, faster, and more effectively in automating traditional IT operations tasks.

As we move forward, expect to see more and more AI- and ML-based functionality move into the core cloud management platforms as well. Amazon recently released AWS Control Tower to aid your company’s journey toward AIOps. While it comes with some impressive features for new account creation and increased multi-account visibility, it also uses service control policies (SCPs) based upon established guardrails (rules and policies). As new resources and accounts come online, Control Tower can enforce compliance with those policies automatically, preventing “bad behavior” by users and eliminating the need for IT to configure resources after they come online.
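To make the guardrail idea concrete, here is a minimal sketch of the kind of service control policy such guardrails rely on, created and attached with boto3. The policy content, policy name, approved regions, and the target OU ID are all assumptions for illustration, not Control Tower’s own policies.

```python
# Sketch: create an illustrative SCP that denies activity outside approved regions
# and attach it to an organizational unit. Names, regions, and the OU ID are
# placeholders; Control Tower manages its own guardrail policies for you.
import json
import boto3

org = boto3.client("organizations")

deny_unapproved_regions = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "us-west-2"]}
            },
        }
    ],
}

policy = org.create_policy(
    Name="deny-unapproved-regions",                 # hypothetical policy name
    Description="Restrict activity to approved regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_unapproved_regions),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-example",                     # hypothetical OU ID
)
```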

It is an exciting time for autonomic platforms and autonomic systems capabilities in the cloud, and we are excited to help customers realize the many potential capabilities and benefits which can help automate, orchestrate and proactively maintain and optimize your core cloud infrastructure.

To learn more about autonomic systems and capabilities, check out Gartner’s AIOps research and reach out to 2nd Watch. We would love to help you realize the potential of autonomic platforms and autonomic technologies in your cloud environment today!

-Dusty Simoni, Sr Product Manager

 

 

 


5 Steps to Cloud Cost Optimization: Hurdles to Optimization are Organizational, Not Technical


In my last blog post, I covered the basics of cloud cost optimization using the Six Pillars model and focused on the ‘hows’ of optimization and the ‘whys’ of its importance. In this blog, I’d like to talk about what comes next: preparing your organization for your optimization project. The main reason most clients delay and/or avoid confronting issues regarding cloud optimization is that it’s incredibly complex. Challenges from cloud sprawl to misaligned corporate priorities can cause a project to come to a screeching halt. Understanding the challenges before you begin is essential to getting off on the right foot.

 

5 Main Cloud Cost Optimization Challenges

Here are the 5 main challenges we’ve seen when implementing a cloud cost optimization project:

  • Cloud sprawl refers to the unrestricted, unregulated creation and use of cloud resources; cloud cost sprawl, therefore, refers to the costs incurred related to the use of each and every cloud resource (i.e., storage, instances, data transfer, etc.). This typically presents as decentralized account or subscription management.
  • Billing complexity, in this case, specifically refers to the ever-changing and variable billing practices of cloud providers and the invoices they provide you. Considering all the possible configurations across an organization’s solutions, Amazon Web Services (AWS) alone has more than 500,000 SKUs that could appear on any single invoice. If you cannot make sense of your bill up front, your cost optimization efforts will languish.
  • Lack of access to data and application metrics is one of the biggest barriers to entry. Cost optimization is a data-driven exercise. Without billing data and application metrics over time, many incorrect assumptions end up being made, resulting in higher costs.
  • Misaligned policies and methods can be the obstacle that makes or breaks your optimization project. When every team, organization, or department has its own method for managing cloud resources and spend, the solution becomes more about organizational change and less about technology implementation. This can be difficult to get a handle on, especially if the teams aren’t on the same page about the need to optimize.
  • A lack of incentives may seem surprising (after all, who doesn’t want to save money?), but it is the number one blocker we have seen in large enterprises on the way to achieving optimization end goals. Central IT is laser-focused on cost management, while application and business units are focused more on speed and innovation. Both goals are important, but without the right incentives, process, and communication, this fails every time. Building executive support to reapply realized optimization savings directly back to the business units, increasing their application and innovation budgets, is the only way to bridge misaligned priorities and build the foundation for lasting optimization motivation.

According to many cloud software vendors, waste accounts for 30% to 40% of all cloud usage. The RightScale State of the Cloud Report 2019 found that 35% of cloud spend is wasted. 2nd Watch has found that within large enterprise companies, there can be up to 70% savings through a combination of software and services. It often starts with simply implementing a solid cost optimization methodology.

When working on a cloud cost optimization project, it’s essential to first get the key stakeholders of an organization to agree on the benefits of optimizing your cloud spend. Once the executive team is on board and an owner is assigned, the path to optimization is clear, covering each of the 6 Pillars of Optimization.

Path to Cloud Optimization

Step One: Scope and Objectives

As with any project, you first want to identify the goals and scope and then uncover the current state environment. Here are a few questions to ask to scope out your work:

  • Overall Project Goal – Are you focused on cost savings, workload optimization, uptime, performance or a combination of these factors?
  • Budget – Do you want to sync to a fiscal budget? What is the cycle? What budget do you have for upfront payments? Do you budget at an account level or organization level?
  • Current State – What number of instances and accounts do you have? What types of agreements do you have with your cloud provider(s)?
  • Growth – Do you grow seasonally, or do you have planned growth based on projects? Do you anticipate existing workloads will grow or shrink over time?
  • Measurement – How do you currently view your cloud bill? Do you have detailed billing enabled? Do you have performance metrics over time for your applications?
  • Support – Do you have owners for each application? Are people available to assess each app? Are you able to shut down apps during off hours? Do you have resources to modernize applications?

Step Two: Data Access

One of the big barriers to true optimization is gaining access to data. In order to gather the data (step three), you first need to get the team on board to grant you, or the optimization project team, access to the information.

During this step, get your cross-functional team excited about the project, share the goals and current state info you gathered in the previous step and present your strategy to all your stakeholders.

Stakeholders may include application owners, cloud account owners, IT Ops, IT security and/or developers who will have to make changes to applications.

Remember, data is key here, so find the people who own the data. Those who are monitoring applications or own the accounts are the typical stakeholders to involve. Then share with them the goals and bring them along this journey.

Step Three: Data Management

Data is grouped into a few buckets:

  1. Billing Data – Get a clear view of your cloud bill over time.
  2. Metrics Data – CPU, I/O, Bandwidth and Memory for each application over time is essential.
  3. Application Data – Conduct interviews of application owners to understand the nuances. Graph out risk tolerance, growth potential, budget constraints and identify the current tagging strategy.

A month’s worth of data is good, though three months is much better for understanding the capacity variances for applications and projecting into the future.
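For the metrics bucket, a sketch like the one below, assuming an AWS environment with boto3 access to CloudWatch, pulls 90 days of CPU utilization for a single instance. The instance ID and the daily granularity are placeholders; memory and detailed I/O typically require the CloudWatch agent or an equivalent tool.

```python
# Sketch: 90 days of daily CPU statistics for one EC2 instance via CloudWatch.
# The instance ID is a placeholder; repeat per application and store the results
# alongside billing data for the assessment in step four.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=90)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=86400,                      # one datapoint per day
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), round(point["Average"], 1), round(point["Maximum"], 1))
```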

Step Four: Visualize and Assess Data Usage

This step takes a bit of skill. There are tools like CloudHealth that can help you understand your cost and usage in the cloud, and other tools that can help you understand your application performance over time. Using the data from each of these sources and correlating it across the pillars of optimization is essential to understanding where you can find the optimal cost savings.

I often recommend bringing in an optimization expert for this step. Someone with a data science, cloud and accounting background can help you visualize data and find the best options for optimization.

Step Five: Remediation Plan

Now that you know where you can save, take that information and build out a remediation plan. This should include addressing workloads in one or more of the pillars.

For example, you may shut down resources at night for an application and move it to another family of instances/VMs based on current pricing.

Your remediation should include changes by application as well as:

  1. RI Purchase Strategy across the business on a 1 or 3-year plan.
  2. Auto-Parking Implementation to park your resources when they’re not in use.
  3. Right-Sizing based on CPU, memory, I/O.
  4. Family Refresh or movement to the newer, more cost-effective instance families or VM-series.
  5. Elimination of Waste like unutilized instances, unattached volumes, idle load balancers, etc.
  6. Storage reassessment based on size, data transfer, retrieval time and number of retrieval requests.
  7. Tagging Strategy to track each instance/VM and track it back to the right resources.
  8. IT Chargeback process and systems to manage the process.

Remediation can take anywhere from one month to a year’s time based on organization size and the support of application teams to make necessary changes.

Download our ‘5 Steps to Cloud Cost Optimization’ infographic for a summary of this process.

End Result

With as much as 70% savings possible after implementing one of these projects, you can see the compelling reason to start. A big part of the benefit is organizational and long-lasting, including:

  • Visibility to make the right cloud spending decisions
  • Break-down of your cloud costs by business area for chargeback or showback
  • Control of cloud costs while maintaining or increasing application performance
  • Improved organizational standards to keep optimizing costs over time
  • Identification of short- and long-term cost savings across the various optimization pillars

Many companies reallocate the savings to innovative projects to help their company grow. The outcome of a well-managed cloud cost optimization project can propel your organization into a focus on cloud-native architecture and application refactoring.

Though complex, cloud cost optimization is an achievable goal. By cross-referencing the 6 pillars of optimization with your organization’s policies, applications, and teams, you can quickly find savings of 30–40% and grow from there.

By addressing project risks like lack of awareness, decentralized account management, lack of access to data and metrics, and lack of clear goals, your team can quickly achieve savings.

Ready to get started with your cloud cost optimization? Schedule a Cloud Cost Optimization Discovery Session for a free 2-hour session with our team of experts.

-Stefana Muller, Sr Product Manager



The 6 Pillars of Cloud Cost Optimization

Let me start by painting the picture: You’re the CFO. Or the manager of a department, group, or team, and you’re ultimately responsible for any and all financial costs incurred by your team/group/department. Or maybe you’re in IT and you’ve been told to keep a handle on the costs generated by application use and code development resources. Your company has moved some or all of your projects and apps to the public cloud, and since things seem to be running pretty smoothly from a production standpoint, most of the company is feeling pretty good about the transition.

Except you.

The promise of moving to the cloud to cut costs hasn’t materialized, and attempting to figure out the monthly bill from your cloud provider has you shaking your head.

Source: Amazon Web Services (AWS). “Understanding Consolidated Bills – AWS Billing and Cost Management”. (2017). Retrieved from https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/con-bill-blended-rates.html

From Reserved Instances and on-demand costs to the “unblended” and “blended” rates, attempting to make sense of the bill leaves you no closer to understanding where you can optimize your spend.

It’s not just the pricing structure that requires an entire department of accountants to make sense of; the breakdown of the services themselves is just as mind-boggling. In fact, there are at least 500,000 SKUs and price combinations in AWS alone! In addition, your team likely has no limitation on who can spin up any specific resource at any time, which compounds the problem, especially when staff leave resources running and the proverbial meter racks up the $$ in the background.

Addressing this complex and ever-moving problem is not, in fact, a simple matter, and it requires a comprehensive and intimate approach that starts with understanding the variety of opportunities available for cost and performance optimization. This is where 2nd Watch and our Six Pillars of Cloud Optimization come in.

The Six Pillars of Cloud Cost Optimization

1. Reserved Instances (RIs)

AWS Reserved Instances, Azure Reserved VM Instances, and Google Cloud Committed Use Discounts take the ephemeral out of cloud resources, allowing you to estimate up front what you’re going to use. This also entitles you to steep discounts for pre-planning, which ends up as a great financial incentive.

Most cloud cost optimizations, erroneously, begin and end here—providing you and your organization with a less than optimal solution. Resources to estimate RI purchases are available through cloud providers directly and through third-party optimization tools. For example, CloudHealth by VMware provides a clear picture of where to purchase RIs based on your cloud use over a number of months and will help you manage your RI lifecycle over time.

Two of the major factors to consider with cloud cost optimization are Risk Tolerance and Centralized RI Management portfolios.

  • Risk Tolerance refers to identifying how much you’re willing to spend up front in order to increase the possibility of future gains or recovered profits. For example, can your organization take a risk and cover 70% of your workloads with RIs? Or do you worry about consumption, and will therefore want to limit that to around 20-30%? Also, how long, in years, are you able to project ahead? One year is the least risky, sure, but three years, while also a larger financial commitment, comes with larger cost savings.
  • Centralized RI Management portfolios allow for deeper RI coverage across organizational units, resulting in even greater savings opportunities. For instance, a single application team might have a limited pool of cash in which to purchase RIs. Alternatively, a centralized, whole organization approach would cover all departments and teams for all workloads, based on corporate goals. This approach, of course, also requires ongoing communication with the separate groups to understand current and future resources needed to create and execute a successful RI management program.

Once you identify your risk tolerance and centralize your approach to RIs, you can take advantage of this optimization option. An RI-only optimization strategy is short-sighted, though. It only allows you to take advantage of the pricing options your cloud vendor offers. It is important to overlay RI purchases with the five other optimization pillars to achieve the most effective cloud cost optimization.
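To give a feel for the risk-tolerance trade-off, the back-of-the-envelope sketch below blends on-demand and reserved pricing at different coverage levels for a hypothetical 100-instance fleet. All of the hourly rates are placeholders, not current list prices; substitute your provider’s actual rates and your own fleet size.

```python
# Sketch: compare annual cost of a 100-instance fleet at different RI coverage levels.
# Rates are placeholders for a single illustrative instance type, not real prices.
HOURS_PER_YEAR = 8760
FLEET_SIZE = 100

on_demand_rate = 0.192       # $/hour, placeholder
one_year_ri_rate = 0.121     # $/hour effective, placeholder
three_year_ri_rate = 0.083   # $/hour effective, placeholder

def annual_cost(ri_rate, coverage):
    """Blend RI-covered and on-demand hours for the whole fleet."""
    fleet_hours = HOURS_PER_YEAR * FLEET_SIZE
    return fleet_hours * (coverage * ri_rate + (1 - coverage) * on_demand_rate)

print(f"All on-demand: ${annual_cost(on_demand_rate, 1.0):,.0f}/yr")
for coverage in (0.2, 0.5, 0.7):
    print(f"{int(coverage * 100)}% coverage: "
          f"1-yr RIs ${annual_cost(one_year_ri_rate, coverage):,.0f}/yr, "
          f"3-yr RIs ${annual_cost(three_year_ri_rate, coverage):,.0f}/yr")
```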

2. Auto-Parking

One of the benefits of the cloud is the ability to spin up (and down) resources as you need them. However, the downside of this instant technology is that there is very little incentive for individual team members to terminate these processes when they are finished with them. Auto-Parking refers to scheduling resources to shut down during off hours—an especially useful tool for development and test environments. Identifying your idle resources via a robust tagging strategy is the first step; this allows you to pinpoint resources that can be parked more efficiently. The second step involves automating the spin-up/spin-down process. Tools like ParkMyCloud, AWS Instance Scheduler, Azure Automation, and Google Cloud Scheduler can help you manage the entire auto-parking process.
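As a rough illustration of what the automation step can look like on AWS, here is a minimal auto-parking sketch: a Lambda-style handler, assumed to be triggered by an hourly EventBridge rule, that stops running instances carrying a Schedule=office-hours tag outside working hours. The tag key and value, the hour window, and the trigger are all assumptions; dedicated tools like AWS Instance Scheduler handle the edge cases for you.

```python
# Sketch: stop EC2 instances tagged Schedule=office-hours outside working hours.
# Tag name, hour window, and the hourly trigger are illustrative assumptions.
from datetime import datetime, timezone
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    hour = datetime.now(timezone.utc).hour
    if 13 <= hour < 23:          # roughly 8am-6pm US Central expressed in UTC
        return "inside office hours, nothing to do"

    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return f"stopped {len(instance_ids)} instances"
```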

3. Right-Sizing

Ah, right-sizing, the best way to ensure you’re using exactly what you need and not too little or too much. It seems like a no-brainer to just “enable right-sizing” immediately when you start using a cloud environment. However, without the ability to analyze resource consumption or enable chargebacks, right-sizing becomes a meaningless concept. Performance and capacity requirements for cloud applications often change over time, and this inevitably results in underused and idle resources.

Many cloud providers share best practices in right-sizing, though they spend more time explaining the right-sizing options that exist prior to a cloud migration. This is unfortunate, as right-sizing is an ongoing activity. To be truly effective, it requires implementing policies and guardrails to reduce overprovisioning, tagging resources to enable department-level chargebacks, and properly monitoring CPU, memory, and I/O.

Right-sizing must also take into account auto-parked resources and RIs available. Do you see a trend here with the optimization pillars?
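To show the shape of the analysis, here is a deliberately simplified right-sizing heuristic: given an instance’s average CPU over the review window, suggest the next size down in the same family. The 20% threshold and the size ladder are assumptions, and a real recommendation would also weigh memory, I/O, peak load, and the RI and auto-parking overlap mentioned above.

```python
# Sketch: a naive downsizing heuristic based on average CPU alone.
# Threshold and size ladder are illustrative assumptions, not a recommendation engine.
SIZE_LADDER = ["large", "xlarge", "2xlarge", "4xlarge"]

def suggest_downsize(instance_type: str, avg_cpu: float, threshold: float = 20.0):
    family, _, size = instance_type.partition(".")
    if avg_cpu >= threshold or size not in SIZE_LADDER:
        return None                      # busy enough, or a size outside our ladder
    idx = SIZE_LADDER.index(size)
    if idx == 0:
        return None                      # already the smallest size we consider
    return f"{family}.{SIZE_LADDER[idx - 1]}"

print(suggest_downsize("m5.2xlarge", avg_cpu=11.4))   # -> m5.xlarge
print(suggest_downsize("m5.large", avg_cpu=63.0))     # -> None
```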

4. Family Refresh

Instance types, VM-series and “Instance Families” all describe methods by which cloud providers package up their instances according to the hardware used. Each instance/series/family offers different varieties of compute, memory, and storage parameters. Instance types within their set groupings are often retired as a unit when the hardware required to keep them running is replaced by newer technology. Cloud pricing changes directly in relationship to this changing of the guard, as newer systems replace the old. This is called Family Refresh.

Up-to-date knowledge of the instance types/families being used within your organization is a vital component to estimating when your costs will fluctuate. Truth be told, though, with over 500,000 SKU and price combinations for any single cloud provider, that task seems downright impossible.

Some tools exist, however, that can help monitor/estimate Family Refresh, though they often don’t take into account the overlap that occurs with RIs—or upon application of any of the other pillars of optimization. As a result, for many organizations, Family Refresh is the manual, laborious task it sounds like. Thankfully, we’ve found ways to automate the suggestions through our optimization service offering.
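Even without a tool, a first-pass estimate of a family refresh can be as simple as the arithmetic below. The old-to-new mappings, hourly rates, and fleet counts are placeholders only; a real pass would pull current rates from the provider’s pricing API and account for any RIs already covering the old types.

```python
# Sketch: rough monthly savings from moving older instance families to newer ones.
# Mappings, rates, and counts are placeholders, not current list prices.
HOURS_PER_MONTH = 730

refresh_map = {"m4.xlarge": "m5.xlarge", "c4.2xlarge": "c5.2xlarge"}   # placeholder map
hourly_rate = {                                                         # placeholder $/hr
    "m4.xlarge": 0.200, "m5.xlarge": 0.192,
    "c4.2xlarge": 0.398, "c5.2xlarge": 0.340,
}
fleet = {"m4.xlarge": 12, "c4.2xlarge": 4}                              # placeholder counts

for old_type, count in fleet.items():
    new_type = refresh_map[old_type]
    monthly_delta = (hourly_rate[old_type] - hourly_rate[new_type]) * HOURS_PER_MONTH * count
    print(f"{old_type} -> {new_type}: roughly ${monthly_delta:,.0f}/month across {count} instances")
```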

5. Waste

Related to the issue of instances running long past their usefulness, waste is prevalent in the cloud. Waste may seem like an abstract concept when it comes to virtual resources, but each wasted unit in this case = $$ spent for no purpose. And when there is no limit to the amount of resources you can use, there is also no incentive for individuals using the resources to self-regulate their unused or under-utilized instances. Some examples of waste in the cloud include:

  • AWS RDSs or Azure SQL DBs without a connection
  • Unutilized AWS EC2s
  • Azure VMs that were spun up for training or testing
  • Dated snapshots that are holding storage space that will never be useful
  • Idle load balancers
  • Unattached volumes

Identifying waste takes time and accurate reporting. It is a great reason, however, to invest the time and energy in developing a proper tagging strategy, since waste then becomes instantly traceable to the organizational unit that incurred it and can easily be marked for review and/or removal. We’ve often seen companies buy RIs before they eliminate waste, which, without fail, causes them to overspend in the cloud for at least a year.
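A first waste sweep does not need much tooling. The sketch below, assuming an AWS environment and boto3 access, flags two of the items above: unattached EBS volumes and snapshots older than a retention window. The 180-day cutoff is an assumption, and a production version would paginate and report per tag or account.

```python
# Sketch: list unattached EBS volumes and snapshots older than 180 days.
# The retention cutoff is an assumption; pagination is omitted for brevity.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=180)

volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]   # "available" = not attached
)["Volumes"]
for volume in volumes:
    print(f"unattached volume {volume['VolumeId']} ({volume['Size']} GiB)")

snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
for snapshot in snapshots:
    if snapshot["StartTime"] < cutoff:
        print(f"stale snapshot {snapshot['SnapshotId']} from {snapshot['StartTime'].date()}")
```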

6. Storage

Storage in the cloud is a great way to reduce on-premises hardware spend. That said, because it is so effortless to use, cloud storage can expand exponentially in a very short time, making it nearly impossible to predict cloud spend accurately. Cloud storage is usually charged by four characteristics:

  • Size – How much storage do you need?
  • Data Transfer (bandwidth) – How often does your data need to move from one location to another?
  • Retrieval Time – How quickly do you need to access your data?
  • Retrieval Requests – How often do you need to access your data?

There are a variety of options for different use cases including using more file storage, databases, data backup and/or data archives. Having a solid data lifecycle policy will help you estimate these numbers, and ensure you are both right-sizing and using your storage quantity and bandwidth to its greatest potential at all times.
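On AWS, one common way to express such a data lifecycle policy is an S3 lifecycle configuration like the sketch below. The bucket name, prefix, and the 30/90/365-day tiers are assumptions; pick storage classes and windows that match your actual retrieval-time and retrieval-request needs.

```python
# Sketch: an illustrative S3 lifecycle rule that tiers data down and then expires it.
# Bucket, prefix, day thresholds, and storage classes are placeholder choices.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-archive",                  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```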

So, you see, each of these six pillars of cloud cost optimization houses many moving parts, and with public cloud providers constantly modifying their service offerings and pricing, reining in your wayward cloud can seem unlikely. Plus, optimizing only one of the pillars without considering the others offers little to no improvement and can, in fact, unintentionally cost you more money over time. An efficacious optimization process must take all pillars and the way they overlap into account, institute the right policies and guardrails to ensure cloud sprawl doesn’t continue, and implement the right tools to allow your team to make informed decisions regularly.

The good news is that the future is bright! Once you have completely assessed your current environment, taken the pillars into account, made the changes required to optimize your cloud, and found a method by which to make this process continuous, you can investigate optimization through application refactoring, ephemeral instances, spot instances and serverless architecture.

The promised cost savings of public cloud is reachable, if only you know where to look.

2nd Watch offers a Cloud Cost Optimization service that can help guide you through this process. Our Cloud Cost Optimization service is guaranteed to reduce your cloud computing costs by 20%,* increasing efficiency and performance. Our proven methodology empowers you to make data driven decisions in context, not relying on tools alone. Cloud cost optimization doesn’t have to be time consuming and challenging. Start your cloud cost optimization plan with our proven method for success at https://offers.2ndwatch.com/download-cloud-cost-optimization-datasheet

*To qualify for guaranteed 20% savings, must have at least $50,000/month cloud usage.

Stefana Muller, Sr. Product Manager


2nd Watch Enterprise Cloud Expertise On Display at AWS re:Invent 2017

AWS re:Invent is less than twenty days away and 2nd Watch is proud to be a 2017 Platinum Sponsor for the sixth consecutive year.  As an Amazon Web Services (AWS) Partner Network Premier Consulting Partner, we look forward to attending and demonstrating the strength of our cloud design, migration, and managed services offerings for enterprise organizations at AWS re:Invent 2017 in Las Vegas, Nevada.

About AWS re:Invent

Designed for AWS customers, enthusiasts and even cloud computing newcomers, the nearly week-long conference is a great source of information and education for attendees of all skill levels. AWS re:Invent is THE place to connect, engage, and discuss current AWS products and services via breakout sessions ranging from introductory and advanced to expert as well as to hear the latest news and announcements from key AWS executives, partners, and customers. This year’s agenda offers a full additional day of content for even more learning opportunities, more than 1,000 breakout sessions, an expanded campus, hackathons, boot camps, hands-on labs, workshops, expanded Expo hours, and the always popular Amazonian events featuring broomball, Tatonka Challenge, fitness activities, and the attendee welcome party known as re:Play.

2nd Watch at re:Invent 2017

2nd Watch has been a Premier Consulting Partner in the AWS Partner Network (APN) since 2012 and was recently named a leader in Gartner’s Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide (March 2017). We hold AWS Competencies in Financial Services, Migration, DevOps, Marketing and Commerce, Life Sciences, and Microsoft Workloads, and have recently completed the AWS Managed Service Provider (MSP) Partner Program Audit for the third year in a row. Over the past decade, 2nd Watch has migrated and managed AWS deployments for companies such as Crate & Barrel, Condé Nast, Lenovo, Motorola, and Yamaha.

The 2nd Watch breakout session—Continuous Compliance on AWS at Scale—will be led by cloud security experts Peter Meister and Lars Cromley. The session will focus on the need for continuous security and compliance in cloud migrations, and attendees will learn how a managed cloud provider can use automation and cloud expertise to successfully control these issues at scale in a constantly changing cloud environment. Registered re:Invent Full Conference Pass holders can add the session to their agendas here.

In addition to our breakout session, 2nd Watch will be showcasing our customers’ successes in the Expo Hall located in the Sands Convention Center (between The Venetian and The Palazzo hotels).  We invite you to stop by booth #1104 where you can explore 2nd Watch’s Managed Cloud Solutions, pick up a coveted 2nd Watch t-shirt and find out how you can win one of our daily contest giveaways—a totally custom 2nd Watch skateboard!

Want to make sure you get time with one of 2nd Watch’s Cloud Journey Masters while at re:Invent? Plan ahead and schedule a meeting with one of 2nd Watch’s AWS Professional Certified Architects, DevOps, or Engineers. Last but not least, 2nd Watch will be hosting its annual re:Invent after party on Wednesday, November 29. If you haven’t RSVP’d for THE AWS re:Invent Partner Party, click here to request your invitation. (Event has passed.)

AWS re:Invent is sure to be a week full of great technical learning, networking, and social opportunities.  We know you will have a packed schedule but look forward to seeing you there!  Be on the lookout for my list of “What to Avoid at re:Invent 2017” in the coming days…it’s sure to help you plan for your trip and get the most out of your AWS re:Invent experience.

 

–Katie Laas-Ellis, Marketing Manager, 2nd Watch

 

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About 2nd Watch

2nd Watch is an AWS Premier tier Partner in the AWS Partner Network (APN) providing managed cloud to enterprises. The company’s subject matter experts, software-enabled services and cutting-edge solutions provide companies with tested, proven, and trusted solutions, allowing them to fully leverage the power of the cloud. 2nd Watch solutions are high performing, robust, increase operational excellence, decrease time to market, accelerate growth and lower risk. Its patent-pending, proprietary tools automate everyday workload management processes for big data analytics, digital marketing, line-of-business and cloud native workloads. 2nd Watch is a new breed of business which helps enterprises design, deploy and manage cloud solutions and monitors business critical workloads 24×7. 2nd Watch has more than 400 enterprise workloads under its management and more than 200,000 instances in its managed public cloud. The venture-backed company is headquartered in Seattle, Washington. To learn more about 2nd Watch, visit www.2ndwatch.com or call 888-317-7920.


How to Choose the Right Hyperscale Managed Service Provider (MSP)

One of the challenges that many businesses struggle to overcome is how to keep up with the massive (and ongoing) changes in technology and implement best practices for managing them. The Public Cloud, and in particular Hyperscale Cloud providers like AWS, has ushered in a new era of IT technology. This technology changes rapidly and is designed to provide businesses with the building blocks that allow IT organizations to focus on innovation and growth, rather than mess with things that don’t differentiate their business.

A Hyperscale Managed Services Provider (MSP) can help address a very important gap for many businesses that struggle to:

  • Keep up with the frenetic pace of change in Public Cloud
  • Define and use best practices to achieve superior results
  • Manage their infrastructure the most efficient way possible

 

In most cases, Hyperscale MSPs have deep expertise, technology, and automated capabilities to deliver high-quality managed services on a hyperscale platform. And because Hyperscale MSPs are solely focused on delivering capabilities for the cloud IaaS and PaaS that today’s enterprises are using, they are well versed in the best practices and standards needed to achieve the right results for their clients.

So, how do you go about selecting the right MSP?  The answer to this question is critical because we believe choosing the right MSP is one of the most important decisions you will make when consuming the public cloud.  It is also important to note that some of the qualifications to look for when selecting a Hyperscale MSP for your business needs are obvious, while others are more elusive.  I’ve included a few suggestions below to keep in mind when evaluating and selecting the right Hyperscale MSP.

Expertise on the Platform of Your Choice

First and foremost, no two public cloud providers are the same.  Each provider implements MSP strategies differently—from infrastructure and redundancy, to automation and billing concepts.  Secondly, it isn’t enough for a provider to tell you they have a few applications running on the platform.  When looking to entrust someone with your most valuable assets, expertise is key!  An important KPI for measuring the capabilities of a MSP that many businesses overlook is the provider’s depth and breadth of experience. A qualified Hyperscale MSP will have the right certifications, accreditations, and certified engineer-to-customer ratios.  You may feel good about signing with a large provider because they claim a higher number of certified engineers than the smaller firms, until…you realize their certified engineer-to-customer ratio is out of whack.  Having 200 certified engineers means nothing if you have 5,000+ customers.  At 2nd Watch, we have more certified engineers than we do customers, and we like it that way.

The Focus is on Customer Value

This is an obvious recommendation, but it does have some nuances. Many MSPs will simply take the “Your mess for less” approach to managing your infrastructure. Our customers tell us that one of the reasons they chose 2nd Watch was our focus on the things that matter to them. There are many MSPs with the technical capabilities to manage cloud infrastructure, but not all are able to focus on how an enterprise wants to use the Public Cloud. MSPs with the ability to understand their clients’ needs and goals tailor their approach to work for the enterprise rather than making them snap to some preconceived notion of how these things should work. Find an MSP that is willing to make the Public Cloud work the way you want it to, and your overall experience, and the outcome, will be game changing.

Optimize, Optimize, Optimize

Moving to the Public Cloud is just the first step in the journey to realizing business value and transforming IT.  The Cloud is dynamic in nature, and due to that fact, it is important that you don’t rest on just a migration once you are using it.  New instance types, new services, or just optimizing what you are running today are great ways to ensure your infrastructure is running at top notch.  It is important to make sure your MSP has a strong, ongoing story about optimization and how they will provide it.  At 2nd Watch, we break optimization into 3 categories:  Financial Optimization, Technical Optimization and Operations Optimization.  It is a good idea to ask your MSP how they handle these three facets of optimization and at what cadence.  Keep in mind that some providers’ pricing structures can act as a disincentive for optimization.  For example, if your MSP’s billing structure is based on a percentage of your total cloud spend, and they reduce that bill by 30% through optimization efforts, that means they are now getting paid less, proportionately, and are likely not motivated to do this type of optimization on a regular basis as it hurts their revenue.  Alternatively, we have also seen MSPs charge extra for these types of services, so the key is to make sure you ask if it’s included and get details about the services that would be considered an extra charge.

Full Service

The final qualification to look for in a Hyperscale MSP is whether they are a full-service provider.  Too often, pure play MSPs are not able to provide a full service offering under their umbrella.  The most common reason is that they lack professional services to assess and migrate workloads or cloud architects to build out new functionality.

Our enterprise clients tell us that one of their major frustrations is having to work with multiple vendors on a project. With multiple vendors, it is difficult to keep track of who is accountable for what. Why would the vendor doing the migration be motivated to make sure the application is optimized for support if they aren’t providing the support? I have heard horror stories of businesses trying to move to the cloud and becoming frustrated that multiple vendors are involved on the same workload, because the vendors blame each other for missing deadlines or not delivering key milestones or technical content. Your business will be better served by hiring an MSP who can run the full cloud-migration process, from workload assessment and migration to managing and optimizing your cloud infrastructure on an ongoing basis.

In addition to the tips I have listed above, 2nd Watch recommends utilizing Gartner’s Magic Quadrant to help evaluate the various public cloud managed service providers available to you. Gartner positioned 2nd Watch in the Leaders quadrant of the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide for our completeness of vision and ability to execute.  You can download and read the full report here.

 

-Kris Bliesner, CTO

 

 

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document.