Innovation Scoring from 2nd Watch Boosts Cloud Optimization

Does this sound familiar? “You will move to the cloud, for right or wrong, because of a business imperative to get out of your data center, not tomorrow, but yesterday.” Or, “You’re sold on the idea that by migrating to the cloud, you’d be able to reduce your total cost of ownership (TCO), increase flexibility, and accelerate innovation projects.” The cloud practically sells itself, and as a result, you plan to ditch your legacy, on-premises technology and begin your cloud migration journey.

However, if you hop into the cloud without a defined strategy and approach, you’ll experience cloud sprawl, and spiraling cloud costs will negate the touted benefits of the cloud. This sort of “blind faith” in everything the cloud offers is a common mistake many business leaders make, and it can prevent you from considering cloud management and economics as part of your cloud migration strategy.

Without cloud cost governance, your organization will suffer O2: Overprovisioning and Overspending. You’re left confused because this is the exact opposite of the result you expected cloud migration to have. Additionally, if you find yourself in this predicament, you’ll have difficulty pinpointing areas for improvement and initiating corrective action.

Enter Innovation Scoring by 2nd Watch. Our data-driven scoring system will help you assess your applications running in the cloud environment and identify where you are overprovisioning and overspending. Innovation Scoring is the first step to establishing cloud economics and maximizing the value of cloud computing to your business in the long run.

 

The Importance of Cloud Economics

If O2 is how you define your cloud environment, you’ve learned the hard way about the need for cloud economics. While cost savings is a component of cloud economics, the ultimate goal of the practice is to maximize the value of cloud computing for your organization. Implementing cloud economics will give your business insights into which departments are utilizing the cloud, what applications and workloads are using the cloud, and how these moving parts contribute to more impactful and cost-effective business goals.

Without cloud economics, your business will deal with overrun cloud budgets, which are usually due to one or more of the following:

  • Ungoverned costs: your organization has no idea what it is spending on.
  • Unforecasted usage: you see more cloud projects than you had anticipated.
  • Uncommitted mindset: you don’t want to commit to a cloud contract (because you can’t predict its usage), so you miss out on contractual discounts.
  • Wasted dev/test resources: your dev team is overprovisioning their infrastructure.
  • Overestimated production headroom: you are not auto-scaling or have not set proper parameters for auto-scaling your applications.
  • Wrong-sized production: your production environment is overprovisioned, and you pay for the excess resources monthly.
  • Poor design and implementation: your architects make suboptimal design choices for cloud solutions because they are unaware of the costs to the business. 

For cloud economics to work, there must be a company-wide commitment to the practice beyond simply calculating cloud costs. Just like implementing a DevOps practice, impactful cloud economics requires promoting a cross-functional and collaborative culture. Business leaders must encourage transparency and trackability to enable teams to work together harmoniously to manage their cloud infrastructure and prove the true business benefits of the cloud. 

 

2nd Watch’s Innovation Scoring

Cloud economics is critical for your business to reap the maximum benefits of cloud computing. However, cloud economics is a pervasive cultural practice, so it won’t happen at the snap of your fingers. It will require time and effort for your business to establish cloud economics. 

The first step in controlling your cloud budget and governing your cloud platform is to identify areas of improvement. 2nd Watch created the Innovation Scoring system, our proprietary scoring methodology, to help you identify opportunities for optimization and modernization in a data-driven way. 

Our Innovation Scoring methodology will reveal the underlying problem with your cloud management. We’ll be able to identify the application needing improvement and determine why it is suboptimal. Did you set it up incorrectly and need to move to PaaS with autoscaling capabilities? Or did someone write your application in 2005, and you are in dire need of application modernization? Or is it a combination of both? 2nd Watch designed its Innovation Scoring to pinpoint areas for improvement in your database, infrastructure, and/or application. When we ascertain the source of inefficiency, we can address issues contributing to cloud sprawl and skyrocketing cloud costs. 

To calculate your Innovation Score, we analyze several different dynamics related to your cloud applications. The ratings from each category are then cross-tabulated to generate a total view of your entire cloud environment. Your Innovation Score will not only reveal inefficiencies but also allow us to compare your efforts against other similarly sized companies and make sure you are up to industry standards. 
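To make the cross-tabulation idea concrete, here is a purely illustrative sketch of a weighted roll-up of per-category ratings. The categories, weights, and 0-5 rating scale are hypothetical assumptions for this example, not 2nd Watch’s proprietary formula.

    # Illustrative only: a toy weighted scoring model. The categories, weights,
    # and 0-5 rating scale are hypothetical, not the proprietary methodology.
    WEIGHTS = {
        "rightsizing": 0.30,      # how closely provisioned capacity matches demand
        "modernization": 0.25,    # use of managed/PaaS services vs. legacy patterns
        "automation": 0.25,       # auto-scaling, auto-parking, infrastructure as code
        "cost_governance": 0.20,  # tagging coverage, budget alerts, chargeback
    }

    def innovation_score(ratings):
        """Combine per-category ratings (0-5) into a single 0-100 score."""
        weighted = sum(WEIGHTS[cat] * ratings[cat] for cat in WEIGHTS)
        return round(weighted / 5 * 100, 1)

    # Example: one application team's ratings
    print(innovation_score({
        "rightsizing": 2.0,
        "modernization": 3.5,
        "automation": 1.5,
        "cost_governance": 4.0,
    }))  # -> 53.0

A roll-up like this is what makes scores comparable across teams, or against similarly sized companies, as described above.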

2nd Watch understands that cloud economics is a cultural undertaking; therefore, when we assign Innovation Scores to our clients, we do so in a way that encourages company-wide participation. To promote engagement and commitment, we’ve gamified our Innovation Scoring: we split our clients’ technical leadership into teams and calculate each team’s score. When we check in with our clients, we reveal each team’s score to showcase which ones are being innovative and taking advantage of the cloud and which ones have room for improvement. 

 

[Image: Sample Innovation Scoring Output]

 

Our approach to Innovation Scoring promotes friendly competition, which fosters collaboration between teams and a transparent high-level overview of how each team is leveraging the cloud. When our clients are a part of our Innovation Scoring system, it jumpstarts a culture of innovation, transparency, and accountability within their business. 

 

Conclusion

Consider the importance of cloud economics when planning to run your applications in a cloud environment. It is easy to overspend, get overwhelmed, and have no sense of direction. Therefore, cloud economics is beneficial whether you implement it proactively or reactively.

2nd Watch’s Innovation Scoring is a practical first step to getting your cloud budget in order and establishing cloud economics as a standard cultural practice in your organization. Through data and analysis, our Innovation Scoring will help you identify how you can optimize your cloud instance so that you are receiving maximum cloud value for your business. Moreover, Innovation Scoring trains your teams to be communicative and cross-collaborative, which are the traits your company culture needs to succeed in cloud economics.

2nd Watch takes a holistic approach to cloud cost optimization and cloud economics. Contact us, and we’ll show you where and how you can improve your cloud-based applications with our Innovation Scoring.


Manufacturing Analytics: The Power of Data in the Manufacturing Industry

The effects of the pandemic have hit the manufacturing industry in ways no one could have predicted. During the last 18 months, a new term has come up frequently in the news and in conversation: the supply chain crisis. Manufacturers have been disrupted in almost every facet of their business, and they have been put to the test as to whether they can weather these challenges or not. 

 

Manufacturing businesses that began a digital transformation prior to the current global crisis have been more agile in handling the disruptions. That is because manufacturers using data analytics and cloud technology can flexibly adopt the capabilities they need for important business goals, identify inefficiencies more quickly, and adopt a hybrid workforce to make sure production doesn’t stall.

The pandemic has exposed and accelerated the need for manufacturers to digitize and harness the power of modern technology. Real-time data and analytics are fundamental to the manufacturing industry because they create the contextual awareness that is crucial for optimizing products and processes. This is especially important during the supply chain crisis, but it goes beyond the scope of the pandemic. Regardless of external circumstances, manufacturers will want to automate for quicker and smarter decisions in order to remain competitive and have a positive impact on the bottom line.

In this article, we’ll identify the use cases and benefits of manufacturing analytics, which can be applied in any situation at any time. 

What is Manufacturing Analytics?

Manufacturing analytics is used to capture, process, and analyze machine, operational, and system data in order to manage and optimize production. It is used in critical functions – such as planning, quality, and maintenance – because it has the ability to predict future use, avoid failures, forecast maintenance requirements, and identify other areas for improvement. 

To improve efficiency and remain competitive in today’s market, manufacturing companies need to undergo a digital transformation to change the way their data is collected. Traditionally, manufacturers capture data in a fragmented manner: their staff manually check and record readings, fill out forms, and note operation and maintenance histories for machines on the floor. These practices are susceptible to human error and, as a result, risk being highly inaccurate. Moreover, these manual processes are extremely time-consuming and open to biases.

Manufacturing analytics solves these common issues. It collects data from connected devices, which reduces the need for manual data collection and, thereby, cuts down the labor associated with traditional documentation tasks. Additionally, its computational power removes the potential errors and biases that traditional methods are prone to. 

Because manufacturing equipment collects massive volumes of data via sensors and edge devices, the most efficient and effective way to process this data is to feed it to a cloud-based manufacturing analytics platform. Without the power of cloud computing, manufacturers are generating huge amounts of data but losing out on the potential intelligence within it.

Cloud-based services provide a significant opportunity for manufacturers to maximize their data collection. The cloud provides manufacturers access to more affordable computational power and more advanced analytics. This enables manufacturing organizations to gather information from multiple sources, utilize machine learning models, and ultimately discover new methods to optimize their processes from beginning to end. 

Additionally, manufacturing analytics uses advanced models and algorithms to generate insights that are near-real-time and much more actionable. Manufacturing analytics powered by automated machine data collection unlocks powerful use cases for manufacturers that range from monitoring and diagnosis to predictive maintenance and process automation. 

Use Cases for Cloud-Based Manufacturing Analytics

The ultimate goal of cloud-based analytics is to transition from descriptive to predictive practices. Rather than simply collecting data, manufacturers want to be able to leverage their data in near-real-time to get ahead of issues with equipment and processes and to reduce costs. Below are some business use cases for automated manufacturing analytics and how they help enterprises achieve predictive power:

Demand Forecasting and Inventory Management

Manufacturers need to have complete control of their supply chain in order to better manage inventory. However, demand planning is complex. Manufacturing analytics makes this process simpler by providing near-real-time floor data to support supply chain control, which leads to improved purchase management, inventory control, and transportation. The data provides insight into the time and costs needed to build parts and run a given job, which gives manufacturers the power to more accurately estimate their material needs and improve planning.
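As a minimal sketch of the forecasting side of demand planning, the snippet below smooths historical monthly order volumes into a one-step-ahead estimate. The data and smoothing factor are made up for illustration; a production platform would layer in seasonality, lead times, and near-real-time floor data.

    # A minimal demand-smoothing sketch; the history and alpha are illustrative.
    def exponential_smoothing(history, alpha=0.4):
        """Return a one-step-ahead forecast from historical demand values."""
        forecast = history[0]
        for actual in history[1:]:
            forecast = alpha * actual + (1 - alpha) * forecast
        return forecast

    monthly_units = [1200, 1350, 1280, 1500, 1620, 1580]
    print(f"Next-month demand estimate: {exponential_smoothing(monthly_units):.0f} units")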

Managing Supply Chains

For end-to-end visibility in the supply chain, data can be captured from materials in transit and sent straight from external vendor equipment to the manufacturing analytics platform. Manufacturers can then manage their supply chains from a central hub of data collection that organizes and distributes the data to all stakeholders. This enables manufacturing companies to direct and redirect resources to speed production up or slow it down.

Price Optimization

In order to optimize pricing strategies and create accurate cost models, manufacturers need exact timelines and costs. Having an advanced manufacturing analytics platform can help manufacturers determine accurate cycle times to ensure prices are appropriately set. 

Product Development

To remain competitive, manufacturing organizations must invest in research and development (R&D) to build new product lines, improve existing models, and introduce new services. Manufacturing analytics makes it possible for this process to be simulated, rather than using traditional iterative modeling. This reduces R&D costs greatly because real-life conditions can be replicated virtually to predict performance. 

Robotization

Manufacturers are relying more on robotics. As these robots become more intelligent and independent, the data they collect while they execute their duties will increase. This valuable data can be used within a cloud-based manufacturing analytics platform to control quality at the micro level.

Computer Vision Applications

Modern automated quality control harnesses advanced optical devices. These devices can collect information via temperature, optics, and other advanced vision applications (like thermal detection) to detect quality issues and precisely control production stops.

Fault Prediction and Preventative Maintenance

Using near-real-time data, manufacturers can predict the likelihood of a breakdown, and when it may happen, with confidence. This is much more effective than traditional preventive maintenance programs that are use-based or time-based. The accuracy with which manufacturing analytics predicts when and how a machine will break down allows technicians to perform optimal repairs that reduce overall downtime and increase productivity.
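As a minimal sketch of what fault prediction can look like at its simplest, the snippet below flags sensor readings that drift outside a rolling baseline. The readings, window size, and threshold are illustrative assumptions; real platforms typically train models per machine and signal type.

    # A toy drift detector over a stream of vibration readings (illustrative data).
    from collections import deque
    from statistics import mean, stdev

    def drift_alerts(readings, window=20, threshold=3.0):
        """Yield (index, value) for readings far outside the recent baseline."""
        recent = deque(maxlen=window)
        for i, value in enumerate(readings):
            if len(recent) == window:
                baseline, spread = mean(recent), stdev(recent)
                if spread > 0 and abs(value - baseline) > threshold * spread:
                    yield i, value
            recent.append(value)

    # Simulated vibration amplitude with a developing fault at the end
    readings = [1.0 + 0.02 * (i % 5) for i in range(100)] + [1.8, 2.1, 2.4]
    for idx, val in drift_alerts(readings):
        print(f"Reading {idx}: {val} is outside the normal operating band")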

Warranty Analysis

It’s important to analyze information from failed products to understand how products are withstanding the test of time. With manufacturing analytics, products can be improved or changed to reduce failure and therefore costs. Collecting warranty data can also shed light on the use (and misuse) of products, increase product safety, improve repair procedures, reduce repair times, and improve warranty service. 

Benefits of Manufacturing Analytics

In short, cloud-based manufacturing analytics provides awareness and learnings on a near-real-time basis. For manufacturers to be competitive, contextual awareness is crucial for optimizing product development, quality, and costs. Production equipment generates huge volumes of data, and manufacturing analytics allows manufacturers to leverage this data stream to improve productivity and profitability. Here are the tangible benefits and results of implementing manufacturing analytics:

Full Transparency and Understanding of the Supply Chain

In today’s environment, owning the supply chain has never been more critical. Data analytics can help mitigate the challenges that have cropped up with the current supply chain crisis. For manufacturing businesses, this means having the right amount of resources on hand. Data analytics allows manufacturers to remain as lean as possible, which is especially important in today’s global climate. Organizations need to use data analytics to ensure they have the right amount of material and to optimize their supply chains at a time when resources are scarce and conditions are uncertain.

Reduced Costs

Manufacturing analytics reveals insights that can be used to optimize processes, which leads to cost savings. Predictive maintenance programs decrease downtime and manage parts inventories more intelligently, limiting costs and increasing productivity. Robotics and machine learning reduce labor and the associated costs. 

Increased Revenue

Manufacturers must be dynamic in responding to demand fluctuations. Near-real-time manufacturing analytics allows companies to be responsive to ever-changing demands. At any given time, manufacturing companies have up-to-date insights into inventory, product, and supply chains, allowing them to adjust to demand accordingly in order to maintain delivery times. 

Improved Efficiency Across the Board

The amount of information that production equipment collects enables manufacturers to increase efficiency in a variety of ways. This includes reducing energy consumption, mitigating compliance errors, and controlling the supply chain.

Greater Customer Satisfaction

At the end of the day, it is important to know what customers want. Data analytics is a crucial tool for collecting customer feedback, which can be applied to streamlining processes per the customer’s requirements. Manufacturers can analyze the data collected to determine how to personalize services for their consumers, thereby increasing customer satisfaction.

Conclusion

The effects of COVID-19 have shaken up the manufacturing industry. Because of the pandemic’s disruptions, manufacturers are realizing the importance of robust tools – like cloud computing and data analytics – to remain agile, lean, and flexible regardless of external challenges. The benefits that organizations can reap from these technologies go far beyond the horizon of the current supply chain crisis. Leading manufacturers are using data from systems across the organization to increase efficiency, drive innovation, and improve overall performance in any environment.

2nd Watch’s experience managing and optimizing data means we understand industry-specific data and systems. Our manufacturing data analytics solutions and consultants can assist you in building and implementing a strategy that will help your organization modernize, innovate, and outperform the competition. Learn more about our manufacturing solutions and how we can help you gain deep insight into your manufacturing data!


Cloud Migration Challenges: 6 Reasons the Cloud Might Not be What You Think it Is

A lot of enterprises migrate to the public cloud because they see everyone else doing it. And while you should stay up on the latest and greatest innovations – which often happen in the cloud – you need to be aware of the realities of the cloud and understand different cloud migration strategies. You need to know why you’re moving to the cloud. What’s your goal? And what outcomes are you seeking? Make sure you know what you’re getting your enterprise into before moving forward in your cloud journey.

1. Cloud technology is not a project, it’s a constant

Be aware that while there is a starting point to becoming more cloud native – the migration – there is no stopping point. The migration occurs, but the transformation, development, innovation, and optimization are never over.

There are endless applications and tools to consider, your organization will evolve over time, technology changes regularly, and user preferences change even faster. Fueled by your new operating model, cloud computing puts you into continuous motion. While continuous motion is positive for outcomes, you need to be ready to ride the wave regardless of where it goes. Once you get on, success requires that you stay on.

2. Flex-agility is necessary for survival

Flexibility + agility = flex-agility, and you need it in the cloud. Flex-agility enables enterprises to adapt to the risks and unknowns occurring in the world. The pandemic continues to highlight the need for flex-agility in business. Organizations further along in their cloud journeys were able to quickly establish remote workforces, adjust customer interactions, communicate clearly and effectively, and ultimately, continue running. While the pandemic was unprecedented, flex-agility is more commonly needed during natural disasters like floods, hurricanes, and tornadoes; after a ransomware or phishing attack; or when an employee’s device is lost, stolen, or destroyed.

3. You still have to move faster than the competition

Gaining or maintaining your competitive edge in the cloud has a lot to do with speed. Whether it’s the dog-eat-dog nature of your industry, macroeconomics, or the political environment, these are the things that speed up innovation. You might not have any control over them, but they’re shaping the way consumers interact with brands. Again, when you think about how digital transformation evolved during the pandemic, you saw winning businesses move the fastest. The cloud is an amazing opportunity to meet all the demands of your environment, but if you’re not looking forward, forecasting trends, and moving faster than the competition, you could fall behind.

4. People are riskier than technology

In many ways, the technology is the easiest part of an enterprise cloud strategy. It’s with people that a lot of the risk comes into play. You may have a great strategy with clean processes and tactics, but if the execution is poor, the business can’t succeed. A recent survey revealed that 85% of organizations report deficits in cloud expertise, with the top three areas being cloud platforms, cloud native engineering, and security. While business owners acknowledge the importance of these skills, they’re still struggling to attract the caliber of talent necessary.

In addition to partnering with cloud service experts to ensure a capable team, organizations are also reinventing their technical culture to work more like a startup. This can attract cloud-capable talent through hybrid work environments, an emphasis on collaboration, use of agile frameworks, and a culture that fosters innovation.

5. Cost-savings is not the best reason to migrate to the cloud

Buy-in from executives is key for any enterprise transitioning to the cloud. Budget and resources are necessary to continue moving forward, but the business value of a cloud transformation isn’t cost savings. Really, it’s about repurposing dollars to achieve other things. At the end of the day, companies are focused on getting customers, keeping customers, and growing customers, and that’s what the cloud helps to support.

By innovating products and services in a cloud environment, an organization is able to give customers new experiences, sell them new things, and delight them with helpful customer service and a solid user experience. The cloud isn’t a cost center, it’s a business enabler, and that’s what leadership needs to hear.

6. Cloud migration isn’t always the right answer

Many enterprises believe that the process of moving to the cloud will solve all of their problems. Unfortunately, the cloud is simply today’s most popular technology platform. Sure, it can help you reach your goals with easy-to-use functionality, automated tools, and modern business solutions, but it takes effort to utilize and apply those resources for success.

For most organizations, moving to the cloud is the right answer, but it could be the wrong time. The organization might not know how it wants to utilize cloud functionality. Maybe outcomes haven’t been identified yet, the business strategy doesn’t have buy-in from leadership, or technicians aren’t aware of the potential opportunities. Another issue stalling cloud migration is a lack of internal cloud expertise. If your technicians aren’t cloud savvy enough to handle all the moving parts, bring on a collaborative cloud advisor to ensure success.

Ready for the next step in your cloud journey?

Cloud Advisory Services at 2nd Watch provide you with the cloud solution experts necessary to reduce complexity and provide impartial guidance throughout migration, implementation, and adoption. Whether you’re just curious about the cloud or you’re already there, our advanced capabilities support everything from platform selection and cost modeling to app classification and migrating workloads from your on-premises data center. Contact us to learn more!

Lisa Culbert, Marketing


Top 10 Cloud Optimization Best Practices

Cloud optimization is a continuous process specific to a company’s goals, but there are some staple best practices all optimization projects should follow. Here are our top 10.

1. Begin with the end in mind

Business leaders and stakeholders throughout the organization should know exactly what they’re trying to achieve with a cloud optimization project. Additionally, this goal should be revisited on a regular basis to make sure you remain on track to achieve it. Create measures to gauge success at different points and follow the agreed-upon order of operations to complete the process.

2. Create structure around governance and responsibility

Overprovisioning is one of the most common issues adding unnecessary costs to your bottom line. Implement specific and regulated structure around governance and responsibility for all teams involved in optimization to control any unnecessary provisioning. Check in regularly to make sure teams are following the structure and that you only have the tools you need and actively use.

3. Get all the Data you Need

Cloud optimization is a data-driven exercise. To be successful, you need insight into a range of data points. Not only do you need to identify what data you need and be able to get it, but you also need to know what data you’re missing and figure out how to get it. Collaborate with internal teams to make sure essential data isn’t siloed and to find out whether it’s already being collected. Additionally, regularly clean and validate data to ensure reliability for data-based decision making.

4. Implement Tagging Practices

To best utilize the data you have, organizing and maintaining it with strict tagging practices is necessary. Implement a system that works from more than just a technical standpoint. You can also use tagging to launch instances, control your auto-parking methodology, or drive scheduling. Tagging helps you understand the data and see what is driving spend. Whether it’s an environment tag, owner tag, or application tag, tagging provides clarity into spend, which is the key to optimization.
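One simple way to operationalize a tagging policy is to audit for resources that are missing required tags. The sketch below assumes AWS with boto3 and a hypothetical policy requiring Environment, Owner, and Application tags on every EC2 instance; substitute your own tag keys.

    # Hedged sketch: list EC2 instances missing required tags (tag keys are assumptions).
    import boto3

    REQUIRED_TAGS = {"Environment", "Owner", "Application"}

    def untagged_instances(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        paginator = ec2.get_paginator("describe_instances")
        for page in paginator.paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"] for t in instance.get("Tags", [])}
                    missing = REQUIRED_TAGS - tags
                    if missing:
                        yield instance["InstanceId"], sorted(missing)

    for instance_id, missing in untagged_instances():
        print(f"{instance_id} is missing tags: {', '.join(missing)}")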

5. Gain Visibility into Spend

Tagging is one way to see where your spend is going, but it’s not the only one you need. Manage accounts regularly to make sure inactive accounts aren’t continuing to be billed. Set up an internal mechanism to review with your app teams and hold them accountable. It can be as simple as a dashboard with tagging grading, as long as it lets the data speak for itself.
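If you’re on AWS, one hedged example of letting the data speak for itself is pulling a month of spend grouped by an owner tag through the Cost Explorer API. The tag key and date range below are assumptions; adjust them to your own tagging strategy and reporting period.

    # Minimal sketch: one month's unblended cost grouped by an assumed "Owner" tag.
    import boto3

    ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer endpoint

    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2022-01-01", "End": "2022-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "Owner"}],
    )

    for group in response["ResultsByTime"][0]["Groups"]:
        tag_value = group["Keys"][0]  # e.g. "Owner$alice"; "Owner$" means untagged
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{tag_value}: ${amount:,.2f}")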

6. Hire the Right Technical Expertise

Get more out of your optimization with the right technical expertise on your internal team. Savvy technicians should work alongside the business teams to drive the goals of optimization throughout the process. Without collaboration between these departments, you risk moving in differing directions with multiple end goals in mind. For example, one team might be acting with performance or another technical aspect in mind without realizing the implications for optimization. Partnering with optimization experts can also keep teams aligned and moving toward the same goal.

7. Select the Right Tools and Stick with Them

Tools are a part of the optimization process, but they can’t solve problems alone. Additionally, there is an abundance of tools to choose from, many of which have similar functionality and outcomes. Find the right tools for your goals, facilitate adoption, and give them the time and data necessary to produce results. Don’t get distracted by every new, shiny tool available and the “tool champions” fighting for one over another. Avoid the costs of overprovisioning by checking usage regularly and maintaining the governance structure established throughout your teams.

8. Make sure your Tools are Working.

Never assume a tool or a process you’ve put in place is working. In fact, it’s better to assume it’s not working and consistently check its efficiency. This regular practice of confirming the tools you have are both useful and being used will help you avoid overprovisioning and unnecessary spending. For tools to be effective and serve their purpose, you need enough visibility to determine how the tool is contributing to your overall end goal.

9. Empower Someone to Drive the Process.

The number one call to action for anyone diving into optimization is to appoint a leader. Without someone specific, qualified, and active in managing the project with each stakeholder and team involved, you won’t accomplish your goals. Empower this leader internally to gain the respect and attention necessary for employees to understand the importance of continuous optimization and do their part.

10. Partner with Experts.

Finding the right partner to help you optimize efficiently and effectively will make the process easier at every turn. Bringing in an external driver who has the know-how and experience to consult on strategy through implementation, management, and replication is a smart move with fast results.

2nd Watch takes a holistic approach to cloud optimization with a team of experienced data scientists and architects who help you maximize performance and returns on your cloud assets. Are you ready to start saving? Let us help you define your optimization strategy to meet your business needs and maximize your results. Contact Us to take the next step in your cloud journey.

-Willy Sennott, Optimization Practice Manager


Steps to Continuous Cloud Optimization

Cloud optimization is an ongoing task for any organization driven by data. If you don’t believe you need to optimize, or you think you’re already optimized, you may not have the data necessary to see where you’re overprovisioned and wasting spend. Revisit the optimization pillars frequently to best evolve with and take advantage of everything the cloud has to offer.

Begin with the end in mind

The big question is, where are you trying to go? This question should constantly be revisited with internal stakeholders and business leaders. Define the process that will get you there and follow the order of operations identified to reach your optimization goal. Losing sight of the purpose, getting caught up in shiny new tools, or failing to incorporate the right teams could lead you off path.

Empower someone to drive the process

This is pivotal because without this appointed person, cloud optimization will not happen. Give someone the power to drive optimization policies throughout the organization. Companies most successful in achieving optimization have a strong internal mandate to make it a priority. When messages come from the top and are enforced through a project champion, people tend to pay attention, and management is much more effective.

Fill the data gaps

Cloud optimization is a data-driven exercise, so you need all the data you can get to make it valuable. Your tools will be much more compelling when they have the data necessary to make smart recommendations. Understand where to get the data in your organization, and figure out how to get any data you don’t have. Verify your data regularly to confirm accuracy for intelligent decision making geared toward optimization.

Implement tagging practices

The practice of not only implementing, but also actively enforcing your tagging policies, drives optimization. Be it an environment tag, owner tag, or application tag, tags help you understand your data and what or who is driving spend.

Enforce accountability

While lack of tagging and data gaps prevent visibility, overprovisioning is also an accountability issue. Just look at the hundred-plus AWS services that show up on the bill of an organization that’s a long-time user. It’s not uncommon for 20-30% of the total to be attributed to services the organization never even knew existed at the time it migrated to the cloud.

Hold your app teams accountable with an internal mechanism that lets the data speak for itself. It can be as simple as a dashboard with tagging grading, because everybody understands those results.

Rearchitect and refactor

Migrating to the cloud via a lift and shift can be a valuable strategy for certain organizations. However, after a few months in the cloud, you need to intentionally move forward with the next steps. Reevaluating, refactoring and rearchitecting will occur multiple times along the way. Without them, you end up spending more money than necessary.

Continuous optimization is a must

Optimization is not a one and done project because the possibilities are constantly evolving. Almost every day, a new technology is introduced. Maybe it’s a new instance family or tool. A couple years ago it was containers, and before that it was serverless. Being aware of these new and improved technologies is key to maintaining continuous optimization.

Engage with an experienced partner

There are a lot of factors to consider, evaluate, and complete as part of your cloud optimization practice. To maximize your optimization efforts, you want someone experienced to guide your strategy.

One benefit to partnering with an optimization expert, like 2nd Watch, is that an external partner can defuse the internal conflicts typically associated with optimization. So much of the process is navigating internal politics and red tape. A partner helps meld the multiple layers of your business with a holistic approach that ensures your cloud is running as efficiently as possible.

-Willy Sennott, Optimization Practice Manager


Cloud Optimization: Top 5 Challenges and Why Tools Can’t Solve Them

Optimizing your cloud is essential for maximizing budgets, centralizing business units, making informed decisions, and driving performance. Regardless of whether you’re already in the cloud or you’re just beginning to consider migrating, you need to be aware of the challenges to optimization in order to avoid or overcome them and reach your optimization goals.

1. Complexity

The most pervasive challenge of optimization in the cloud is the complexity of the task. Regardless of the cloud platform – AWS, Azure, Google Cloud, or a hybrid cloud strategy – the intricacies are constantly evolving and changing. Trying to stay on top of that as an individual business requires a good amount of time, resources, and effort. Adding new tools and processes to your cloud requires integration, stakeholder agreement, data mining, analysis, and maintenance. While the potential outcomes from optimization are business-changing, it’s an ongoing process with many moving parts.

2. Governance

Standardized governance frameworks bring decentralized business units and disparate stakeholders together to accomplish business-wide objectives. Shared responsibility, from central IT to individual app teams, prevents the costly consequences of overprovisioning.  While many organizations are knowingly overprovisioned, they can’t seem to solve the problem. Part of the issue is simply a lack of overall governance.

3. Data

Cloud optimization is a data-driven exercise. If it’s not data-driven, it’s not scalable. You need to maximize your data by knowing what data you have, where it is, and how to access it. Also important is knowing what data is missing. Many organizations believe they have complete metrics, but they’re not capturing and monitoring memory, which is a huge piece of the puzzle. In fact, memory is one of the most constrained points of data across organizations.

4. Visibility

Incredibly important within data discovery and data mapping is gaining visibility through tagging. Without an enforced and uniform tagging strategy as part of your governance structure, spend can increase without anyone accounting for it. Tags provide insight into your cloud economics, letting you know who is spending, what they are spending it on, and how much they are spending. It’s not uncommon to see larger organizations with a number of individual linked accounts that no one can trace to an owner. We’ve even found, after some digging, that the owners of those accounts haven’t been with the company for months! To get the cost saving benefits from cloud optimization, you need visibility throughout the process.

5. Technical expertise

You need a certain level of technical expertise and intuition to take advantage of all the ways you can optimize your cloud. Too often, techs aren’t necessarily thinking about optimization, but rather make decisions based on other performance or technical aspects. Without optimization informing those decisions, the business drivers may not perform as expected. Partner with data scientists and architects to map connections between data, workloads, resources, financial mechanisms, and your cloud optimization goals.

Tools are part of the solution, but not the entire solution.

While tools can help with your cloud optimization process, they can’t solve these common challenges alone. Tools just don’t have the capability to solve your data gaps. In fact, one foundational issue with tools is the specific algorithms used to generate recommendations: a tool will make recommendations whether or not it has complete data, thereby creating confusion and introducing new risks.

It takes work to get the best results. Someone has to first be able to deduce the information provided by your tools, then put it into context for the various decision makers and stakeholders, and finally, your application owners and businesses teams have to architect the optimization correctly to be able to take advantage of the savings.

In choosing the right tools to aid your optimization, be aware of ‘tool champions’ who create internal noise around decision making. New tools are launched almost daily, and different stakeholders are going to champion different tools.

Once you find a tool, stick with it. Give it a chance to fully integrate with your cloud, provide training, and encourage adoption for best results. The longer it’s a part of your infrastructure, the more it will be able to aid in optimization.

2nd Watch takes a holistic approach to cloud optimization from strategy and planning, to cost optimization, forecasting, modeling and analytics. Download our eBook to learn more about adopting a holistic approach to cloud cost optimization.

-Willy Sennott, Optimization Practice Manager


Cloud Crunch Podcast: 5 Strategic IT Business Drivers CXOs are Contemplating Now

What is the new normal for life and business after COVID-19, and how does that impact IT? We dive into the 5 strategic IT business drivers CXOs are contemplating now and the motivation behind those drivers. Read the corresponding blog article at https://www.2ndwatch.com/blog/five-strategic-business-drivers-cxos-contemplating-now/. We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.


Optimizing your environment using AWS Savings Plans

Surprisingly, AWS has very quietly released a major enhancement/overhaul to purchasing compute resources up front. To date, purchasing Reserved Instances (Standard or Convertible) has offered AWS users great savings for their static workloads. This works because static workloads tend to utilize a set number of resources and RIs are paid for in advance, thereby justifying the financial commitment.

That said, how often do business needs remain constant, particularly at today’s pace of product development? So, until now, you had two choices if you couldn’t use your RIs: take the loss and let the RI term run out, or undertake the hassle of selling it on the marketplace (potentially for a loss). AWS Savings Plans, on the other hand, provide a gigantic leap forward in solving this problem. In fact, you will find that these AWS Savings Plans provide far more flexibility and return for your investment than the standard RI model.

Here is the gist of the AWS Savings Plans program, taken from the AWS site:

AWS Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instances usage, regardless of instance family, size, OS, tenancy or AWS Region, and also applies to AWS Fargate usage.

AWS Savings Plans offer significant savings over On Demand, just like EC2 Reserved Instances, in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. You can sign up for Savings Plans for a 1- or 3-year term and easily manage your plans by taking advantage of recommendations, performance reporting and budget alerts in the AWS Cost Explorer. (Jeff Barr, AWS, 11.06.2019)

This is HUGE for AWS clients, because now, for the first time ever, savings can also be applied to workloads that leverage serverless containers—as well as traditional EC2 instances!

Currently there are two AWS Savings Plans, and here’s how they compare:

EC2 Instance Savings Plan:
  • Offers discount levels up to 72% off on-demand rates (same as RIs).
  • Any changes in instances are restricted to the same AWS Region.
  • Restricts EC2 instance types to the same family, but allows changes in instance size and OS (e.g., t3.medium to t3.2xlarge).
  • EC2 instances only: similar to Convertible RIs, this plan allows you to increase instance size, with a new twist: you can also reduce instance size! Yes, this means you may no longer have to sell your unused RIs on the marketplace!
  • Bottom line: slightly less flexible, but you garner a greater discount.

Compute Savings Plan:
  • Offers discount levels up to 66% off on-demand rates (the same rate as Convertible RIs).
  • Spans Regions, which could be a huge draw for companies that need regional or national coverage.
  • More flexible: does not limit EC2 instance families or OS, so you are no longer locked into a specific instance family at the moment of purchase, as you would be with a traditional RI.
  • Allows clients to mix and match AWS products, such as EC2 and Fargate; extremely beneficial for clients who use a range of environments for their workloads.
  • Bottom line: more flexible, but with less of a discount.

As with standard RI purchases, understanding your workloads will be key to determining when to use AWS Savings Plans vs. standard RIs (RIs aren’t going anywhere, but we recommend that Savings Plans be used in place of RIs moving forward) vs. On-Demand (including analysis of potential savings from auto-parking, seasonality, elasticity, and so on).
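As a toy illustration of that analysis (not an official AWS calculator), the sketch below compares staying on demand with committing to a Compute Savings Plan for a workload that doesn’t run around the clock. The rates, discount, and utilization figure are assumptions chosen only to show the arithmetic.

    # Toy comparison of on-demand vs. a Savings Plan commitment (all figures assumed).
    on_demand_hourly = 1.00       # blended on-demand spend for the workload ($/hr)
    savings_plan_discount = 0.30  # assumed effective discount for a 1-year, no-upfront plan
    utilization = 0.80            # fraction of hours the workload actually runs

    committed_hourly = on_demand_hourly * (1 - savings_plan_discount)

    # On demand you only pay for hours used; a Savings Plan commitment accrues every hour.
    on_demand_annual = on_demand_hourly * utilization * 8760
    savings_plan_annual = committed_hourly * 8760

    print(f"On demand:    ${on_demand_annual:,.0f}/yr")
    print(f"Savings Plan: ${savings_plan_annual:,.0f}/yr")
    print("Savings Plan wins" if savings_plan_annual < on_demand_annual else "Stay on demand")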

Sound a bit overwhelming? Fear not! This is where 2nd Watch’s Cloud Optimization service excels! Enrollment starts with a full analysis of your organization’s usage, AWS environment, and any other requirements/restrictions your organization may have. The final result is a detailed report, expertly determined by our AWS-certified optimization engineers, with our savings findings and recommendations—customized just for you!

Due to the nature of AWS Savings Plans, they will bring the most immediate value to clients who are either new to AWS or don’t have any RI commitments currently on their account. This is due to the fact that AWS Savings Plans cannot, unfortunately, replace existing RI purchases. Whatever your goals, our optimization experts are ready to help you plan the most strategically efficient and cost effective “next step” of your cloud transformation.

And that’s just the beginning

If you think that AWS Savings Plans may benefit your new or existing AWS deployment, contact us to jumpstart an analysis.

-Jeff Collins, Cloud Optimization Product Management


5 Steps to Cloud Cost Optimization: Hurdles to Optimization are Organizational, Not Technical


In my last blog post, I covered the basics of cloud cost optimization using the Six Pillars model, and focused on the ‘hows’ of optimization and the ‘whys’ of its importance. In this blog, I’d like to talk about what comes next: preparing your organization for your optimization project. The main reason most clients delay and/or avoid confronting issues regarding cloud optimization is that it’s incredibly complex. Challenges from cloud sprawl to misaligned corporate priorities can cause a project to come to a screeching halt. Understanding the challenges before you begin is essential to getting off on the right foot.

 

5 Main Cloud Cost Optimization Challenges

Here are the 5 main challenges we’ve seen when implementing a cloud cost optimization project:

  • Cloud sprawl refers to the unrestricted, unregulated creation and use of cloud resources; cloud cost sprawl, therefore, refers to the costs incurred related to the use of each and every cloud resource (i.e., storage, instances, data transfer, etc.). This typically presents as decentralized account or subscription management.
  • Billing complexity, in this case, specifically refers to the ever-changing and variable billing practices of cloud providers and the invoices they provide you. Considering all the possible configurations across an organization’s solutions, Amazon Web Services (AWS) alone has more than 500,000 SKUs that could appear on any single invoice. If you cannot make sense of your bill up front, your cost optimization efforts will languish.
  • Lack of access to data and application metrics is one of the biggest barriers to entry. Cost optimization is a data-driven exercise. Without billing data and application metrics over time, many incorrect assumptions end up being made, resulting in higher costs.
  • Misaligned policies and methods can be the obstacle that will make or break your optimization project. When every team, organization or department has their own method for managing cloud resources and spend, the solution becomes more organizational change and less technology implementation. This can be difficult to get a handle on, especially if the teams aren’t on the same page with needing to optimize.
  • A lack of incentives may seem surprising (after all, who doesn’t want to save money?), but in our experience it is the number one blocker to achieving optimization end goals in large enterprises. Central IT is laser-focused on cost management, while application and business units are focused more on speed and innovation. Both goals are important, but without the right incentives, processes, and communication, this fails every time. Building executive support to directly reapply realized optimization savings back to the business units to increase their application and innovation budgets is the only way to bridge misaligned priorities and build the foundation for lasting optimization motivation.

According to many cloud software vendors, waste accounts for 30% to 40% of all cloud usage. In the RightScale State of the Cloud Report 2019, a survey revealed that 35% of cloud spend is wasted. 2nd Watch has found that within large enterprise companies, there can be up to 70% savings through a combination of software and services.  It often starts by just implementing a solid cost optimization methodology.

When working on a project for cloud cost optimization, it’s essential to first get the key stakeholders of an organization to agree to the benefits of optimizing your cloud spend. Once the executive team is on board and an owner is assigned, the path to optimization is clear, covering each of the 6 Pillars of Optimization.

Path to Cloud Optimization

Step One: Scope and Objectives

As with any project, you first want to identify the goals and scope and then uncover the current state environment. Here are a few questions to ask to scope out your work:

  • Overall Project Goal – Are you focused on cost savings, workload optimization, uptime, performance or a combination of these factors?
  • Budget – Do you want to sync to a fiscal budget? What is the cycle? What budget do you have for upfront payments? Do you budget at an account level or organization level?
  • Current State – What number of instances and accounts do you have? What types of agreements do you have with your cloud provider(s)?
  • Growth – Do you grow seasonally, or do you have planned growth based on projects? Do you anticipate existing workloads to grow or shrink over time?
  • Measurement – How do you currently view your cloud bill? Do you have detailed billing enabled? Do you have performance metrics over time for your applications?
  • Support – Do you have owners for each application? Are people available to assess each app? Are you able to shut down apps during off-hours? Do you have resources to modernize applications?

Step Two: Data Access

One of the big barriers to a true optimization is gaining access to data. In order to gather the data (step three), you first need to get the team on board to grant you or the optimization project team access to the information.

During this step, get your cross-functional team excited about the project, share the goals and current state info you gathered in the previous step and present your strategy to all your stakeholders.

Stakeholders may include application owners, cloud account owners, IT Ops, IT security and/or developers who will have to make changes to applications.

Remember, data is key here, so find the people who own the data. Those who are monitoring applications or own the accounts are the typical stakeholders to involve. Then share with them the goals and bring them along this journey.

Step Three: Data Management

Data is grouped into a few buckets:

  1. Billing Data – Get a clear view of your cloud bill over time.
  2. Metrics Data – CPU, I/O, Bandwidth and Memory for each application over time is essential.
  3. Application Data – Conduct interviews of application owners to understand the nuances. Map out risk tolerance, growth potential, and budget constraints, and identify the current tagging strategy.

A month’s worth of data is good, though three months of data is much better for understanding the capacity variances for applications and projecting into the future.

Step Four: Visualize and Assess Data Usage

This step takes a bit of skill. There are tools like CloudHealth that can help you understand your cost and usage in the cloud. Then there are other tools that can help you understand your application performance over time. Using the data from each of these sources and correlating it across the pillars of optimization is essential to understanding where you can find the optimal cost savings.

I often recommend bringing in an optimization expert for this step. Someone with a data science, cloud and accounting background can help you visualize data and find the best options for optimization.

Step Five: Remediation Plan

Now that you know where you can save, take that information and build out a remediation plan. This should include addressing workloads in one or more of the pillars.

For example, you may shut down resources at night for an application and move it to another family of instances/VMs based on current pricing.

Your remediation should include changes by application as well as:

  1. RI Purchase Strategy across the business on a 1 or 3-year plan.
  2. Auto-Parking Implementation to park your resources when they’re not in use.
  3. Right-Sizing based on CPU, memory, I/O.
  4. Family Refresh or movement to the newer, more cost-effective instance families or VM-series.
  5. Elimination of Waste like unutilized instances, unattached volumes, idle load balancers, etc. (see the sketch after this list).
  6. Storage reassessment based on size, data transfer, retrieval time and number of retrieval requests.
  7. Tagging Strategy to track each instance/VM and track it back to the right resources.
  8. IT Chargeback process and systems to manage the process.
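One small, hedged example of the waste-elimination item above: on AWS, unattached EBS volumes quietly accrue storage charges. The boto3 sketch below lists them; the region and the use of boto3 are assumptions, and equivalent checks exist on other clouds.

    # Hedged sketch: find unattached ("available") EBS volumes still incurring charges.
    import boto3

    def unattached_volumes(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        paginator = ec2.get_paginator("describe_volumes")
        filters = [{"Name": "status", "Values": ["available"]}]
        for page in paginator.paginate(Filters=filters):
            for volume in page["Volumes"]:
                yield volume["VolumeId"], volume["Size"]  # Size is in GiB

    total_gib = 0
    for volume_id, size in unattached_volumes():
        total_gib += size
        print(f"{volume_id}: {size} GiB unattached")
    print(f"Total unattached storage: {total_gib} GiB")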

Remediation can take anywhere from one month to a year’s time based on organization size and the support of application teams to make necessary changes.

Download our ‘5 Steps to Cloud Cost Optimization’ infographic for a summary of this process.

End Result

With as much as 70% savings possible after implementing one of these projects, you can see the compelling reason to start. A big part of the benefits is organizational and long-lasting, including:

  • Visibility to make the right cloud spending decisions
  • Break-down of your cloud costs by business area for chargeback or showback
  • Control of cloud costs while maintaining or increasing application performance
  • Improved organizational standards to keep optimizing costs over time
  • Identification of short- and long-term cost savings across the various optimization pillars

Many companies reallocate the savings to innovative projects to help their company grow. The outcome of a well-managed cloud cost optimization project can propel your organization into a focus on cloud-native architecture and application refactoring.

Though complex, cloud cost optimization is an achievable goal. By cross-referencing the 6 pillars of optimization with your organization’s policies, applications, and teams, you can quickly find savings of 30-40% and grow from there.

By addressing project risks like lack of awareness, decentralized account management, lack of access to data and metrics, and lack of clear goals, your team can quickly achieve savings.

Ready to get started with your cloud cost optimization? Schedule a Cloud Cost Optimization Discovery Session for a free 2-hour session with our team of experts.

-Stefana Muller, Sr Product Manager



The 6 Pillars of Cloud Cost Optimization

Let me start by painting the picture: You’re the CFO. Or the manager of a department, group, or team, and you’re ultimately responsible for any and all financial costs incurred by your team/group/department. Or maybe you’re in IT and you’ve been told to keep a handle on the costs generated by application use and code development resources. Your company has moved some or all of your projects and apps to the public cloud, and since things seem to be running pretty smoothly from a production standpoint, most of the company is feeling pretty good about the transition.

Except you.

The promise that moving to the cloud would cut costs hasn’t materialized, and attempting to figure out the monthly bill from your cloud provider has you shaking your head.

Source: Amazon Web Services (AWS). “Understanding Consolidated Bills – AWS Billing and Cost Management”. (2017). Retrieved from https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/con-bill-blended-rates.html

From Reserved Instances and on-demand costs, to the “unblended” and “blended” rates, attempting to even make sense of the bill has you no closer to understanding where you can optimize your spend.

It’s not even just the pricing structure that requires an entire department of accountants to make sense of; the breakdown of the services themselves is just as mind-boggling. In fact, there are at least 500,000 SKUs and price combinations in AWS alone! In addition, your team likely has no limitation on who can spin up any specific resource at any time, intrinsically compounding the problem—especially when staff leave them running, the proverbial meter racking up the $$ in the background.

Addressing this complex and ever-moving problem is not, in fact, a simple matter, and requires a comprehensive and intimate approach that starts with understanding the variety of opportunities available for cost and performance optimization. This is where 2nd Watch and our Six Pillars of Cloud Optimization come in.

The Six Pillars of Cloud Cost Optimization

1. Reserved Instances (RIs)

AWS Reserved Instances, Azure Reserved VM Instances, and Google Cloud Committed Use Discounts take the ephemeral out of cloud resources, allowing you to estimate up front what you’re going to use. This also entitles you to steep discounts for pre-planning, which ends up as a great financial incentive.

Most cloud cost optimization efforts erroneously begin and end here, providing you and your organization with a less-than-optimal solution. Resources to estimate RI purchases are available directly through cloud providers and through third-party optimization tools. For example, CloudHealth by VMware provides a clear picture of where to purchase RIs based on your cloud use over a number of months and will help you manage your RI lifecycle over time.

Two of the major factors to consider with cloud cost optimization are Risk Tolerance and Centralized RI Management portfolios.

  • Risk Tolerance refers to identifying how much you’re willing to spend up front in order to increase the possibility of future gains or recovered profits. For example, can your organization take a risk and cover 70% of your workloads with RIs? Or do you worry about consumption, and will therefore want to limit that to around 20-30%? Also, how long, in years, are you able to project ahead? One year is the least risky, sure, but three years, while also a larger financial commitment, comes with larger cost savings.
  • Centralized RI Management portfolios allow for deeper RI coverage across organizational units, resulting in even greater savings opportunities. For instance, a single application team might have a limited pool of cash in which to purchase RIs. Alternatively, a centralized, whole organization approach would cover all departments and teams for all workloads, based on corporate goals. This approach, of course, also requires ongoing communication with the separate groups to understand current and future resources needed to create and execute a successful RI management program.

Once you identify your risk tolerance and centralize your approach to RIs, you can take advantage of this optimization option. An RI-only optimization strategy, though, is short-sighted: it only lets you take advantage of the pricing options your cloud vendor offers. It is important to overlay RI purchases with the five other optimization pillars to achieve the most effective cloud cost optimization.
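To make the risk-tolerance math concrete, here is a minimal back-of-envelope sketch in Python. The hourly rates, fleet size, and 70% coverage level are illustrative assumptions, not figures from any provider's price list; substitute values from your own billing data.

```python
# Illustrative RI vs. on-demand comparison. All prices are hypothetical
# placeholders -- pull real rates from your cloud provider's pricing pages.

HOURS_PER_YEAR = 8760

on_demand_hourly = 0.10      # assumed on-demand rate per instance-hour
ri_effective_hourly = 0.065  # assumed effective 1-year RI rate per instance-hour
instance_count = 50          # steady-state instances in the environment
ri_coverage = 0.70           # fraction of the fleet covered by RIs (risk tolerance)

covered = instance_count * ri_coverage
uncovered = instance_count - covered

annual_all_on_demand = instance_count * on_demand_hourly * HOURS_PER_YEAR
annual_with_ris = (covered * ri_effective_hourly
                   + uncovered * on_demand_hourly) * HOURS_PER_YEAR

savings = annual_all_on_demand - annual_with_ris
print(f"All on-demand:             ${annual_all_on_demand:,.0f}/yr")
print(f"With {ri_coverage:.0%} RI coverage:      ${annual_with_ris:,.0f}/yr")
print(f"Estimated savings:         ${savings:,.0f}/yr "
      f"({savings / annual_all_on_demand:.0%})")
```

Note that the sketch assumes the covered instances run all year; if they sit idle, the RI commitment becomes waste of its own, which is exactly why the other pillars matter.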

2. Auto-Parking

One of the benefits of the cloud is the ability to spin up (and down) resources as you need them. However, the downside of this instant technology is that there is very little incentive for individual team members to terminate these processes when they are finished with them. Auto-Parking refers to scheduling resources to shut down during off hours—an especially useful tool for development and test environments. Identifying your idle resources via a robust tagging strategy is the first step; this allows you to pinpoint resources that can be parked more efficiently. The second step involves automating the spin-up/spin-down process. Tools like ParkMyCloud, AWS Instance Scheduler, Azure Automation, and Google Cloud Scheduler can help you manage the entire auto-parking process.
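As a rough illustration of the spin-down half of auto-parking, the sketch below stops running EC2 instances that carry a hypothetical auto-park tag. It assumes AWS and the boto3 SDK; the tag convention and region are examples only, and in practice this logic would be triggered on an off-hours schedule by one of the tools above rather than run ad hoc.

```python
# Minimal auto-parking sketch using boto3 (AWS SDK for Python).
# Assumes instances that may be parked carry a hypothetical tag
# "auto-park=true" and that AWS credentials are already configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running instances opted in to auto-parking via the tag convention.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:auto-park", "Values": ["true"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Parked {len(instance_ids)} instances: {instance_ids}")
else:
    print("Nothing to park.")
```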

3. Right-Sizing

Ah, right-sizing, the best way to ensure you’re using exactly what you need and not too little or too much. It seems like a no-brainer to just “enable right-sizing” immediately when you start using a cloud environment. However, without the ability to analyze resource consumption or enable chargebacks, right-sizing becomes a meaningless concept. Performance and capacity requirements for cloud applications often change over time, and this inevitably results in underused and idle resources.

Many cloud providers share best practices in right-sizing, though they spend more time explaining the right-sizing options that exist prior to a cloud migration. This is unfortunate, as right-sizing is an ongoing activity: to be truly effective, it requires implementing policies and guardrails to reduce overprovisioning, tagging resources to enable department-level chargebacks, and properly monitoring CPU, memory, and I/O.
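As an illustration of the monitoring piece, the sketch below pulls average CPU utilization for a single instance from CloudWatch and flags it as a right-sizing candidate when it stays under a threshold. The instance ID, 14-day lookback, and 20% threshold are assumptions for the example; a real policy would also weigh memory, I/O, and peak behavior.

```python
# Sketch: flag a low-CPU instance as a right-sizing candidate.
# Instance ID, lookback window, and threshold are illustrative assumptions.
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # hypothetical instance
lookback_days = 14
cpu_threshold = 20.0                  # percent average CPU

end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    StartTime=end - timedelta(days=lookback_days),
    EndTime=end,
    Period=86400,                     # one datapoint per day
    Statistics=["Average"],
)

datapoints = stats["Datapoints"]
if datapoints:
    avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
    if avg_cpu < cpu_threshold:
        print(f"{instance_id}: avg CPU {avg_cpu:.1f}% over {lookback_days} days "
              "-- candidate for a smaller instance size.")
    else:
        print(f"{instance_id}: avg CPU {avg_cpu:.1f}% -- leave as is.")
else:
    print("No CPU data returned; check the instance ID and region.")
```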

Right-sizing must also take into account auto-parked resources and RIs available. Do you see a trend here with the optimization pillars?

4. Family Refresh

Instance types, VM-series and “Instance Families” all describe methods by which cloud providers package up their instances according to the hardware used. Each instance/series/family offers different varieties of compute, memory, and storage parameters. Instance types within their set groupings are often retired as a unit when the hardware required to keep them running is replaced by newer technology. Cloud pricing changes directly in relationship to this changing of the guard, as newer systems replace the old. This is called Family Refresh.

Up-to-date knowledge of the instance types/families being used within your organization is a vital component of estimating when your costs will fluctuate. Truth be told, though, with over 500,000 SKUs and price combinations for any single cloud provider, that task seems downright impossible.

Some tools exist, however, that can help monitor/estimate Family Refresh, though they often don’t take into account the overlap that occurs with RIs—or upon application of any of the other pillars of optimization. As a result, for many organizations, Family Refresh is the manual, laborious task it sounds like. Thankfully, we’ve found ways to automate the suggestions through our optimization service offering.
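To show what a family-refresh estimate looks like in the simplest possible terms, the toy sketch below maps a couple of older AWS instance types to their newer generations and computes the monthly cost difference. The generation mapping reflects common successions, but the hourly rates are made-up placeholders; a real estimate would pull current prices and account for RI coverage and the other pillars.

```python
# Toy illustration of a family-refresh estimate. The generation mapping
# reflects common AWS family successions, but the hourly rates below are
# made-up placeholders -- use your provider's current price list.
HOURS_PER_MONTH = 730

refresh_map = {"m4.xlarge": "m5.xlarge", "c4.xlarge": "c5.xlarge"}
assumed_hourly = {          # hypothetical rates for the example only
    "m4.xlarge": 0.200, "m5.xlarge": 0.192,
    "c4.xlarge": 0.199, "c5.xlarge": 0.170,
}
fleet = {"m4.xlarge": 12, "c4.xlarge": 8}   # current instance counts

for old_type, count in fleet.items():
    new_type = refresh_map[old_type]
    delta = (assumed_hourly[old_type] - assumed_hourly[new_type]) \
            * count * HOURS_PER_MONTH
    print(f"{count} x {old_type} -> {new_type}: ~${delta:,.0f}/month difference")
```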

5. Waste

Related to the issue of instances running long past their usefulness, waste is prevalent in the cloud. Waste may seem like an abstract concept when it comes to virtual resources, but each wasted unit in this case is money spent for no purpose. And when there is no limit to the amount of resources you can use, there is also no incentive for the individuals using those resources to self-regulate their unused or under-utilized instances. Some examples of waste in the cloud include:

  • AWS RDS or Azure SQL databases with no connections
  • Unutilized AWS EC2 instances
  • Azure VMs that were spun up for training or testing
  • Dated snapshots that are holding storage space that will never be useful
  • Idle load balancers
  • Unattached volumes

Identifying waste takes time and accurate reporting. That alone is a great reason to invest the time and energy in developing a proper tagging strategy, since waste will then be instantly traceable to the organizational unit that incurred it and easily marked for review and/or removal. We've often seen companies buy RIs before they eliminate waste, which, without fail, causes them to overspend in the cloud for at least a year.
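As a small example of waste reporting, this sketch lists unattached EBS volumes, one of the waste categories above, along with their size and age. It assumes AWS and boto3; the same idea extends to idle load balancers, disconnected databases, and stale snapshots.

```python
# Sketch: report unattached (wasted) EBS volumes in one region.
# Assumes AWS credentials are configured for boto3.
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")

# "available" status means the volume is not attached to any instance.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

now = datetime.now(timezone.utc)
total_gb = 0
for vol in volumes:
    age_days = (now - vol["CreateTime"]).days
    total_gb += vol["Size"]
    print(f"{vol['VolumeId']}: {vol['Size']} GiB, unattached, "
          f"created {age_days} days ago")

print(f"{len(volumes)} unattached volumes, "
      f"{total_gb} GiB of storage billed for no purpose.")
```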

6. Storage

Storage in the cloud is a great way to reduce on-premises hardware spend. That said, because it is so effortless to use, cloud storage can expand exponentially in a very short time, making accurate cloud spend nearly impossible to predict. Cloud storage is usually charged according to four characteristics:

  • Size – How much storage do you need?
  • Data Transfer (bandwidth) – How often does your data need to move from one location to another?
  • Retrieval Time – How quickly do you need to access your data?
  • Retrieval Requests – How often do you need to access your data?

There are a variety of options for different use cases, including file storage, databases, data backup, and data archives. Having a solid data lifecycle policy will help you estimate these numbers and ensure you are right-sizing your storage and using its capacity and bandwidth to their greatest potential at all times.
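To make the lifecycle idea concrete, here is a minimal sketch that applies an S3 lifecycle rule: objects transition to infrequent-access storage after 30 days, to an archive tier after 90 days, and expire after a year. The bucket name, prefix, and day counts are illustrative assumptions; the right values depend on the retrieval-time and retrieval-frequency characteristics listed above.

```python
# Sketch: apply an S3 lifecycle policy so aging data moves to cheaper tiers.
# Bucket name, prefix, and transition windows are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

lifecycle_rules = {
    "Rules": [
        {
            "ID": "tier-down-old-logs",
            "Filter": {"Prefix": "logs/"},          # hypothetical prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

s3.put_bucket_lifecycle_configuration(
    Bucket="example-cost-optimization-bucket",       # hypothetical bucket
    LifecycleConfiguration=lifecycle_rules,
)
print("Lifecycle policy applied.")
```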

So, you see, each of these six pillars of cloud cost optimization houses many moving parts, and with public cloud providers constantly modifying their service offerings and pricing, it can seem that wrangling in your wayward cloud is unlikely. Moreover, optimizing only one of the pillars without considering the others offers little to no improvement and can, in fact, unintentionally cost you more money over time. An effective optimization process must take all the pillars and the way they overlap into account, institute the right policies and guardrails to ensure cloud sprawl doesn't continue, and implement the right tools to allow your team to make informed decisions regularly.

The good news is that the future is bright! Once you have completely assessed your current environment, taken the pillars into account, made the changes required to optimize your cloud, and found a method by which to make this process continuous, you can investigate optimization through application refactoring, ephemeral instances, spot instances and serverless architecture.

The promised cost savings of public cloud is reachable, if only you know where to look.

2nd Watch offers a Cloud Cost Optimization service that can help guide you through this process. Our Cloud Cost Optimization service is guaranteed to reduce your cloud computing costs by 20%,* increasing efficiency and performance. Our proven methodology empowers you to make data-driven decisions in context, not relying on tools alone. Cloud cost optimization doesn't have to be time-consuming and challenging. Start your cloud cost optimization plan with our proven method for success at https://offers.2ndwatch.com/download-cloud-cost-optimization-datasheet

*To qualify for guaranteed 20% savings, must have at least $50,000/month cloud usage.

Stefana Muller, Sr. Product Manager