How to Maximize the Business Value of The Cloud Using Cloud Economics

Cloud economics is crucial for organizations to make the most of their cloud solutions, and business leaders need to prioritize shifting their company culture to embrace accountability and trackability. 

When leaders hear the phrase “cloud economics,” they think about budgeting and controlling costs. Cost management is an element of cloud economics, but it is not the entire equation. For cloud economics to be implemented beneficially, organizations must realize that cloud economics is not a budgetary practice, but rather an organizational culture shift.

The very definition of “economics” indicates that the study is more than just a numbers game. Economics is “a science concerned with the process or system by which goods and services are produced, sold, and bought.” The practice of economics involves a whole “process or system” where actors and actions are considered and accounted for. 

With this definition in mind, cloud economics means that companies are required to look at key players and behaviors when evaluating their cloud environment in order to maximize the business value of their cloud. 

Once an organization has fully embraced the study of cloud economics, it will be able to gain insight into which departments are utilizing the cloud, what applications and workloads are utilizing the cloud, and how all of these moving parts contribute to greater business goals. Embodying transparency and trackability enables teams to work together in a harmonious way to control their cloud infrastructure and prove the true business benefits of the cloud. 

If business leaders want to apply cloud economics to their organizations, they must go beyond calculating cloud costs. They will need to promote a culture of cross-functional collaboration and honest accountability. Leadership should prioritize and facilitate the joint efforts of cloud architects, cloud operations, developers, and the sourcing team. 

Cloud economics will encourage communication, collaboration, and change in culture, which will have the added benefit of cloud cost management and cloud business success. 

Where do companies lose control of their cloud costs?

When companies lose control of cloud costs, the business value of the cloud disappears as well. If cloud spending is running over and there is no business value to show for it, how are leaders supposed to feel good about their cloud infrastructure? Going over budget with no benefits would not be a sound business case for any enterprise in any industry. 

It is easy for cloud spending to spiral out of control, and it usually boils down to poor business decisions from leadership. Company leaders should first recognize that they wield the power to manage cloud costs and foster communication between teams. If they are making poor business decisions, like prioritizing speedy delivery over well-written code or not promoting transparency, then they are allowing practices that negatively impact cloud costs. 

When leaders push their teams to be fast rather than thorough, it creates technical debt and tension between teams. The following sub-optimal practices can happen when leadership is not prioritizing cloud cost optimizations:

  • Developers ignore seemingly small administrative tasks that are actually immensely important and consequential, like rightsizing infrastructure or turning off inactive applications. 
  • Architects select suboptimal designs that are easier and faster to implement but are more expensive to run.
  • Developers use inefficient code and crude algorithms in order to ship a feature faster, but then fail to revisit performance optimizations that would reduce resource consumption.
  • Developers forgo deployment automation that would help to automatically rightsize.
  • Developers build code that isn’t inherently cloud-native, and therefore not cloud-optimized.
  • Finance and procurement teams are only looking at the bottom line and don’t fully understand why the cloud bill is so high, thereby creating tension between IT/dev and finance/procurement. 
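The first bullet above — rightsizing infrastructure and turning off inactive applications — can be approached mechanically. Below is a minimal, hypothetical sketch of such a triage pass: the utilization thresholds and the sample inventory are assumptions for illustration, not 2nd Watch recommendations or real cloud data.

```python
# Hypothetical sketch: flag cloud instances that look idle or oversized
# based on average CPU utilization over some lookback window.
# Thresholds below are illustrative assumptions, not vendor guidance.

IDLE_CPU_PCT = 5        # below this, the instance is likely safe to stop
OVERSIZED_CPU_PCT = 20  # below this, the instance is a downsizing candidate

def triage_instances(instances):
    """Group instances into stop / downsize / keep buckets."""
    actions = {"stop": [], "downsize": [], "keep": []}
    for inst in instances:
        cpu = inst["avg_cpu_pct"]
        if cpu < IDLE_CPU_PCT:
            actions["stop"].append(inst["name"])
        elif cpu < OVERSIZED_CPU_PCT:
            actions["downsize"].append(inst["name"])
        else:
            actions["keep"].append(inst["name"])
    return actions

# Made-up inventory standing in for real monitoring data:
inventory = [
    {"name": "dev-app-1", "avg_cpu_pct": 2},
    {"name": "batch-etl", "avg_cpu_pct": 12},
    {"name": "prod-web", "avg_cpu_pct": 55},
]
print(triage_instances(inventory))
```

In practice the utilization figures would come from the provider's monitoring service rather than a hard-coded list, but the triage logic is the same.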

When these actions compound, they lead to an infrastructure mess that is incredibly difficult to clean up. Poorly implemented designs that are not easily scalable will require a significant amount of development time to fix, leaving companies with inefficient cloud infrastructure and unreasonably high cloud costs.

Furthermore, these high and unexplained cloud bills cause rifts between teams and are detrimental to collaboration efforts. Lack of accountability and visibility causes developer and finance teams to have misaligned business objectives. 

Poor cloud governance and culture are derived from leadership’s misguided business decisions and muddled planning. If leaders don’t prioritize cloud cost optimization through cloud economics, the business value of the cloud is diminished and company collaboration will suffer. Developers and architects will continue to execute processes that create high cloud costs, and finance and procurement teams will forever be at odds with the IT team.

What are the benefits of cloud economics?

Below are a few common business pitfalls that leaders can easily address if they embrace the practice of cloud economics:

Decentralized Costs and Budgets

Knowing budgets may seem obvious, but more often than not, leaders don’t even know what they are spending on the cloud. This is usually due to siloed department budgets and a lack of disclosure. Cloud economics requires leaders to create visibility into their cloud spend and open channels of communication about allocation, budgeting, and forecasting.

Lack of Planning and Unanticipated Usage 

If organizations don’t plan, they will end up over-utilizing the cloud. Failing to forecast or proactively budget cloud resources leads to provisioning unnecessary or unused resources. With cloud economics, leaders are responsible for the strategies, systems, and internal communications that connect cloud costs with business goals. 

Non-Committal Mindset 

This issue is a culmination of other problems. If business leaders are unsure of what they are doing in the cloud, they are less willing to commit to long-term cloud contracts. Unwillingness to commit to contracts is a missed opportunity for business leaders because long-term engagements are more cost-friendly. Once leaders have implemented cloud economics to inspire confidence in their cloud infrastructure, they can assertively evaluate purchasing options in the most cost-effective way.

What are the steps to creating a culture around cloud economics?

Cloud economics is a study that goes beyond calculating and cutting costs. It is a company culture that is a cross-functional effort. Though it seems like a significant undertaking, the steps to get started are quite manageable. Below is a high-level plan that business leaders must take charge of to create a culture around prioritizing cloud economics:

#1. Inform

Stage one consists of collecting data and understanding the current cloud situation. Company leaders will need to know the true costs of the cloud before they can proceed. Creating visibility around the current state is also the first step to creating a culture of communication and transparency amongst teams and stakeholders.

#2. Optimize

Once the baseline is understood, leadership can analyze the data in order to optimize cloud costs. The visibility of the current state is crucial for teams and leadership to understand what they are working with and how they can optimize it. This stage is where a lot of conversations happen amongst teams to come up with an optimization action plan. It requires teams and stakeholders to communicate and work together, which ultimately builds trust among each other.

#3. Operate

Finally, the data analysis and learnings can be implemented. With the optimization action plan, leaders should know what areas of the cloud demand optimization first and how to optimize these areas. At this point in the process, teams and stakeholders are comfortable with cross-collaboration and honest communications amongst each other. This opens up a transparent feedback loop that is necessary for continuous improvement. 

Conclusion

The entire organization stands to gain when cloud economics is prioritized. A cost-efficient cloud infrastructure will lead to improved productivity, cross-functional collaboration between teams, and focused efforts towards greater business objectives. 

When it comes to cloud economics and optimization, 2nd Watch is the go-to partner for enterprise-level services and support. Our team of experts and cloud-accredited professionals help businesses plan, analyze, and recommend strategies to create a culture of cloud economics and accountability. Control cloud costs and maximize the business value of your cloud today by contacting a 2nd Watch cloud expert.

Mary Fellows | Director of Cloud Economics at 2nd Watch


Innovation Scoring from 2nd Watch Boosts Cloud Optimization

Does this sound familiar? “You will move to the cloud, for right or wrong, because of a business imperative to get out of your data center, not tomorrow, but yesterday.” Or, “You’re sold on the idea that by migrating to the cloud, you’d be able to reduce your total cost of ownership (TCO), increase flexibility, and accelerate innovation projects.” The cloud practically sells itself, and as a result, you plan to ditch your legacy, on-premises technology and begin your cloud migration journey.

However, suppose you hop into the cloud without a defined strategy and approach. In that case, you’ll experience cloud sprawl, and spiraling cloud costs will negate the touted benefits of the cloud. This sort of “blind faith” in all the cloud offers is a common mistake many business leaders make, and it may have prevented you from considering cloud management and economics as part of your cloud migration strategy.  

Without cloud cost governance, your organization will suffer O2: Overprovisioning and Overspending. You’re left confused because this is the exact opposite result you thought cloud migration would have. Additionally, if you find yourself in this predicament, you have difficulty pinpointing areas for improvement to initiate corrective action. 

Enter Innovation Scoring by 2nd Watch. Our data-driven scoring system will help you assess your applications running in the cloud environment and identify where you are overprovisioning and overspending. Innovation Scoring is the first step to establishing cloud economics and maximizing the value of cloud computing to your business in the long run.


The Importance of Cloud Economics

If O2 is how you define your cloud environment, you’ve learned the hard way about the need for cloud economics. While cost savings is a component of cloud economics, the ultimate goal of the practice is to maximize the value of cloud computing for your organization. Implementing cloud economics will give your business insights into which departments are utilizing the cloud, what applications and workloads are using the cloud, and how these moving parts contribute to more impactful and cost effective business goals. 

Without cloud economics, your business will deal with overrun cloud budgets, which are usually due to one or more of the following:

  • Ungoverned costs: your organization has no idea what it is spending on.
  • Unforecasted usage: you see more cloud projects than you had anticipated.
  • Uncommitted mindset: you don’t want to commit to a cloud contract (because you can’t predict its usage), so you miss out on contractual discounts.
  • Wasted dev/test resources: your dev team is overprovisioning their infrastructure.
  • Overestimated production headroom: you are not auto-scaling or have not set proper parameters for autoscaling for your applications.
  • Wrongsized production: your production environment is overprovisioned, and you pay for the excess resources monthly. 
  • Poor design and implementation: your architects make suboptimal design choices for cloud solutions because they are unaware of the costs to the business. 

For cloud economics to work, there must be a company-wide commitment to the practice beyond simply calculating cloud costs. Just like implementing a DevOps practice, impactful cloud economics requires promoting a cross-functional and collaborative culture. Business leaders must encourage transparency and trackability to enable teams to work together harmoniously to manage their cloud infrastructure and prove the true business benefits of the cloud. 


2nd Watch’s Innovation Scoring

Cloud economics is critical for your business to reap the maximum benefits of cloud computing. However, cloud economics is a pervasive cultural practice, so it won’t happen at the snap of your fingers. It will require time and effort for your business to establish cloud economics. 

The first step in controlling your cloud budget and governing your cloud platform is to identify areas of improvement. 2nd Watch created the Innovation Scoring system, our proprietary scoring methodology, to help you identify opportunities for optimization and modernization in a data-driven way. 

Our Innovation Scoring methodology will reveal the underlying problem with your cloud management. We’ll be able to identify the application needing improvement and determine why it is suboptimal. Did you set it up incorrectly and need to move to PaaS with autoscaling capabilities? Or did someone write your application in 2005, and you are in dire need of application modernization? Or is it a combination of both? 2nd Watch designed its Innovation Scoring to pinpoint areas for improvement in your database, infrastructure, and/or application. When we ascertain the source of inefficiency, we can address issues contributing to cloud sprawl and skyrocketing cloud costs. 

To calculate your Innovation Score, we analyze several different dynamics related to your cloud applications. The ratings from each category are then cross-tabulated to generate a total view of your entire cloud environment. Your Innovation Score will not only reveal inefficiencies but also allow us to compare your efforts against other similarly sized companies and make sure you are meeting industry standards. 
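To make the cross-tabulation idea concrete, here is a purely illustrative sketch of rolling per-category ratings up into one composite score per team. 2nd Watch's actual Innovation Scoring methodology is proprietary; the category names, weights, and ratings below are invented for this example.

```python
# Illustrative only: a weighted average of hypothetical category ratings.
# The real Innovation Scoring methodology is proprietary to 2nd Watch.

WEIGHTS = {"infrastructure": 0.4, "database": 0.3, "application": 0.3}

def composite_score(ratings):
    """Weighted average of per-category ratings (each assumed 0-100)."""
    return round(sum(WEIGHTS[cat] * ratings[cat] for cat in WEIGHTS), 1)

# Hypothetical teams, as in the gamified scoring described above:
team_ratings = {
    "team-alpha": {"infrastructure": 80, "database": 60, "application": 70},
    "team-beta":  {"infrastructure": 50, "database": 90, "application": 40},
}
scores = {team: composite_score(r) for team, r in team_ratings.items()}
print(scores)
```

A per-team composite like this is what makes the friendly-competition comparison possible: every team's score is computed the same way, so differences reflect how each team uses the cloud, not how it was measured.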

2nd Watch understands that cloud economics is a cultural undertaking; therefore, when we assign Innovation Scores to our clients, we do so in a way that encourages company-wide participation. To promote engagement and commitment, we’ve gamified our Innovation Scoring: we split our clients’ technical leadership into teams and calculate each team’s score. When we check in with our clients, we reveal each team’s score to showcase which ones are being innovative and taking advantage of the cloud and which ones have room for improvement. 


Sample Innovation Scoring Output


Our approach to Innovation Scoring promotes friendly competition, which fosters collaboration between teams and a transparent high-level overview of how each team is leveraging the cloud. When our clients are a part of our Innovation Scoring system, it jumpstarts a culture of innovation, transparency, and accountability within their business. 


Conclusion

Consider the importance of cloud economics when planning to run your applications in a cloud environment. It is easy to overspend, get overwhelmed, and have no sense of direction. Therefore, cloud economics is beneficial whether you implement it proactively or reactively.

2nd Watch’s Innovation Scoring is a practical first step to getting your cloud budget in order and establishing cloud economics as a standard cultural practice in your organization. Through data and analysis, our Innovation Scoring will help you identify how you can optimize your cloud instance so that you are receiving maximum cloud value for your business. Moreover, Innovation Scoring trains your teams to be communicative and cross-collaborative, which are the traits your company culture needs to succeed in cloud economics.

2nd Watch takes a holistic approach to cloud cost optimization and cloud economics. Contact us, and we’ll show you where and how you can improve your cloud-based applications with our Innovation Scoring.


5 Cloud Optimization Benefits

When making a cloud migration, a common term that gets tossed around is “cloud optimization”. If your organization is new to the cloud, optimizing your environment is essential to ensuring your migration pays off quickly and continues to do so in the long term.

If your organization is already established in the cloud, you may observe higher costs than expected due to cloud sprawl, under-utilized resources, and improper allocation of resources. Cloud optimization helps your organization reduce these costs and improve overall efficiency in the cloud.

What is cloud optimization?

The definition of cloud optimization may vary from one cloud service provider to another, but generally, cloud optimization is the process of analyzing, configuring, provisioning, and right-sizing cloud resources to maximize performance and minimize waste for cost efficiency. The reality is that many organizations’ cloud environments are configured in an inefficient manner that creates unnecessary cloud spend. With proper cloud optimization tools and practices, these unnecessary costs can be eliminated.

While cloud optimization is mostly discussed in terms of cloud spend, cost optimization is simply one facet of cloud optimization, which also extends to overall performance and organizational efficiency. Some examples of cloud optimization practices that your organization can adopt right now include:

  • Right-sizing: Matching your cloud computing instance types (e.g., containers and VMs) and sizes with enough resources to sufficiently meet your workload performance and capacity needs at the lowest cost possible.
  • Family Refresh: Replace outdated systems with updated ones to maximize performance.
  • Autoscaling: Scale your resources according to your application demand so you are only paying for what you use.
  • Applying Discounts: Reserved instances (RIs) allow companies to commit to cloud resources for a long period of time. The longer the commitment and the more a company is prepared to pre-pay at the beginning of the term, the greater the discount. Discounted pricing models like RIs and spot instances will drive down your cloud costs when matched to the right workloads.
  • Identify Use of RIs: Identifying workloads suited to RIs can be an effective way to save money in the cloud.
  • Eliminate Waste: Regulating unused resources is a core component of cloud optimization. If you haven’t already adopted cloud optimization practices, you are most likely using more resources than necessary or not using certain resources to their full capacity.
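The discount math behind reserved instances is simple enough to sketch. The hourly rates and upfront figure below are made-up placeholders, not real cloud pricing; the point is only how commitment and prepayment trade against the pay-as-you-go rate.

```python
# Hedged example of RI discount arithmetic with placeholder rates.
# None of these numbers reflect actual AWS/Azure/GCP pricing.

HOURS_PER_YEAR = 8760

def annual_cost(hourly_rate, upfront=0.0):
    """Total yearly cost for one always-on instance."""
    return round(hourly_rate * HOURS_PER_YEAR + upfront, 2)

on_demand = annual_cost(0.10)                     # pay-as-you-go
ri_no_upfront = annual_cost(0.065)                # 1-yr commitment, no prepay
ri_all_upfront = annual_cost(0.0, upfront=480.0)  # prepaid 1-yr term

savings_pct = round(100 * (1 - ri_all_upfront / on_demand), 1)
print(f"on-demand ${on_demand}, all-upfront RI ${ri_all_upfront} "
      f"({savings_pct}% cheaper)")
```

Note the hidden assumption in this math: the instance runs all year. For spiky or short-lived workloads the on-demand or spot price can beat the committed rate, which is why RI purchases should follow workload analysis rather than precede it.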

Why is cloud optimization important?

Overspending in the cloud is a common issue many organizations face by allocating more resources to a workload than necessary. Integrating cloud optimization practices can reap many benefits for your cloud infrastructure and your organization, including the following:

  • Cloud Efficiency: When workload performance, compliance, and cost are continually balanced against the best-fit infrastructure in real-time, efficiency is achieved. Implementing cloud optimization practices will eliminate as much cloud resource waste as possible, increasing the performance of your cloud environment.
  • Cost Savings: Although cloud optimization comes in a variety of forms, cost optimization is the most important component for many organizations. By reducing waste in the cloud, costs are reduced as a byproduct.
  • Greater Visibility: Cloud optimization practices utilize analytics to provide visibility into your cloud environment to make data-driven decisions. Implementing optimization tools also provides cost visibility, so your organization has a better perspective on cloud spend.
  • Increased Productivity: Once a cloud optimization strategy is implemented, IT teams will spend less time trying to solve problems because an optimized environment prevents problems before they occur.
  • Organizational Innovation & Efficiency: Implementing cloud optimization often is accompanied by a cultural shift within organizations such as improved decision-making and collaboration across teams.

What are cloud optimization services?

Public cloud services providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud have over 500,000 distinct prices and technical combinations that can overwhelm the most experienced IT organizations and business units. Luckily, there are already services that can help your organization achieve the cloud optimization it needs to drive business outcomes. Cloud optimization services help your organization identify areas of improvement in your cloud for cost savings and efficiency, create an optimization strategy for your organization, and can manage your cloud infrastructure for continuous optimization.

At 2nd Watch, we take a holistic approach to cloud optimization. We have developed various optimization pillars based on real-time data to ensure your cloud environments are running as efficiently as possible. Behind our solutions for cloud optimization is a team of experienced data scientists and architects that help you maximize the performance and returns of your cloud assets. Our services offerings for cloud optimization at 2nd Watch include:

  • Strategy & Planning: Define your optimization strategy with our proven methodology, tailored to meet your desired business outcomes and maximize your results.
  • Cost Optimization Assessment: Gain the visibility necessary to make data-driven decisions. Identify opportunities across our Pillars of Optimization to maximize cost savings and cloud environment efficiency.
  • Spot Instance & Container Optimization: Save up to 90% compared to traditional cloud infrastructure by running both Instances/VMs and Containers on spot resources for relevant workloads.
  • Multi-Cloud Optimization: Cloud optimization on a single public cloud is one challenge but optimizing a hybrid cloud is a whole other challenge. Apply learning from your assessment to optimize your cloud environment for AWS, Microsoft Azure, Google Cloud, and VMware on AWS.
  • Forecasting, Modeling, & Analytics: Understand your past usage, and model and forecast your future needs with the analytical data needed for visibility across your organization.

Our cloud optimization process starts with data, and you have a lot of it. But data alone can lead you astray, yielding wasted resources and overspend. There are many other factors to evaluate, such as EDP/EA agreements and Savings Plans/RI purchases, to ensure you choose the most cost-effective option for your business. Strategically, our data scientists and architects map connections between data and workloads. We then make correlations between how workloads interact with each resource and the optimal financial mechanism to reach your cloud optimization goals.
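The forecasting and modeling bullet above can be illustrated with the simplest possible model: fit a linear trend to past monthly spend and project it forward. The spend figures are hypothetical, and a real forecast would account for seasonality and planned workload changes; this is only a sketch of the idea.

```python
# Minimal sketch of spend forecasting: ordinary least-squares line
# through (month index, spend), projected one period ahead.
# The monthly figures are invented for illustration.

def linear_forecast(spend, periods_ahead=1):
    """Fit y = intercept + slope * x and extrapolate."""
    n = len(spend)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(spend) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, spend))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return round(intercept + slope * (n - 1 + periods_ahead), 2)

monthly_spend = [10_000, 11_200, 12_100, 13_500]  # last four months, USD
print("next month estimate:", linear_forecast(monthly_spend))
```

Even a crude trend line like this is useful as a budget sanity check: if actuals start outrunning the projection, something in the environment changed and is worth investigating.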

Cloud Optimization with 2nd Watch

Working with a managed cloud service provider like 2nd Watch will give your organization the expertise needed for cloud optimization. If you want to learn more about cost savings or are interested in fully optimizing your cloud infrastructure, contact us to take your next steps.



Datacenter Migration to the Cloud: Why Your Business Should Do it and How to Plan for it

Datacenter migration is ideal for businesses that are looking to exit or reduce on-premises datacenters, migrate workloads as is, modernize apps, or leave another cloud. Executing migrations, however, is no small task, and as a result, many enterprise workloads still run in on-premises datacenters. Often technology leaders want to migrate more of their workloads and infrastructure to a private or public cloud, but they are turned off by the seemingly complex processes and strategies involved in cloud migration or lack the internal cloud skills necessary to make the transition.


Though datacenter migration can be a daunting business initiative, the benefits of moving to the cloud are well worth the effort, and the challenges of the migration process can be mitigated by creating a strategy, using the correct tools, and utilizing professional services. Datacenter migration provides a great opportunity to revise, rethink, and improve an organization’s IT architecture. It also ultimately impacts business-critical drivers such as reducing capital expenditure, decreasing ongoing cost, improving scalability and elasticity, improving time-to-market, enacting digital transformation, and attaining improvements in security and compliance.

What are Common Datacenter Migration Challenges?

To ensure a seamless and successful migration to the cloud, businesses should be aware of the potential complexities and risks associated with a datacenter migration. The complexities and risks are addressable, and if addressed properly, organizations can create not only an optimal environment for their migration project, but provide the launch point for business transformation.

Not Understanding Workloads

While cloud platforms are touted as flexible, the cloud is a service-oriented resource and should be treated as such. To be successful in cloud deployment, organizations need to be aware of compatibility, performance requirements (including hardware, software, and IOPS), required software, and adaptability to changes in their workloads. Teams need to run their cloud workloads on the cloud service that is best aligned with the needs of the application and the business.

Not Understanding Licensing

Cloud marketplaces allow businesses to easily “rent” software at an hourly rate. Though the ease of this purchase is enticing, it’s important to remember that it’s not the only option out there. Not all large vendors offer licensing mobility for all applications outside the operating system. Instead, companies should leverage existing relationships with licensing brokers; just because a business is migrating to the cloud doesn’t mean it should abandon existing licensing channels. Organizations should familiarize themselves with their licensing choices to better maximize ROI.

Not Looking for Opportunities to Incorporate PaaS

Platform as a service (PaaS) is a cloud computing model where a cloud service provider delivers hardware and software tools to users over the internet versus a build-it-yourself Infrastructure as a Service (IaaS) model. The PaaS provider abstracts everything—servers, networks, storage, operating system software, databases, development tools—enabling teams to focus on their application. This enables PaaS customers to build, test, deploy, run, update and scale applications more quickly and inexpensively than they could if they had to build out and manage an IaaS environment on top of their application. While businesses shouldn’t feel compelled to rewrite all their network configurations and operating environments, they should see where they can have quick PaaS wins to replace aging systems.

Not Proactively Preparing for Cloud Migration

Building a new datacenter is a major IT event and usually goes hand-in-hand with another significant business event, such as an acquisition, or outgrowing the existing datacenter. In the case of moving to a new on-premises datacenter, the business will slow down as the company takes on a physical move. Migrating to the cloud is usually not coupled with an eventful business change, and as a result, business does not stop when a company chooses to migrate to the cloud. Therefore, a critical part of cloud migration success is designing the whole process as something that can run along with other IT changes that occur on the same timeline. Application teams frequently adopt cloud deployment practices months before their systems actually migrate to the cloud. By doing so, the team is ready before their infrastructure is even prepared, which makes cloud migration a much smoother event. Combining cloud events with other changes in this manner will maximize a company’s ability to succeed.

Treating and Running the Cloud Environment Like Traditional Datacenters

It seems obvious that cloud environments should be treated differently from traditional datacenters, but this is actually a common pitfall for organizations to fall into. For example, preparing to migrate to the cloud should not include traditional datacenter services, like air conditioning, power supply, physical security, and other datacenter infrastructure, as part of the planning. Again, this may seem very obvious, but if a business is used to certain practices, it can be surprisingly difficult to break entrenched mindsets and processes.

How to Plan for a Datacenter Migration

While there are potential challenges associated with datacenter migration, the benefits of moving from physical infrastructure, enterprise datacenters, and/or on-premises data storage systems to a cloud datacenter or a hybrid cloud system are well worth the effort.

Now that we’ve gone over the potential challenges of datacenter migration, how do businesses enable a successful datacenter migration while effectively managing risk?

Below, we’ve laid out a repeatable high-level migration strategy broken down into four phases: Discovery, Planning, Execution, and Optimization. By leveraging such a repeatable framework, organizations create the opportunity to identify assets, minimize migration costs and risks using a multi-phased migration approach, enable deployment and configuration, and finally, optimize the end state.

Phase 1: Discovery

During the Discovery phase, companies should understand and document the entire datacenter footprint. This means understanding the existing hardware mapping, software applications, storage layers (databases, file shares), operating systems, networking configurations, security requirements, models of operation (release cadence, how to deploy, escalation management, system maintenance, patching, virtualization, etc.), licensing and compliance requirements, as well as other relevant assets.

The objective of this phase is to have a detailed view of all relevant assets and resources of the current datacenter footprint.

The key milestones in the Discovery phase are:

  • Creating a shared datacenter inventory footprint: Every team and individual who is a part of the datacenter migration to the cloud should be aware of the assets and resources that will go live.
  • Sketching out an initial cloud platform foundations design: This involves identifying centralized concepts of the cloud platform organization such as folder structure, Identity and Access Management (IAM) model, network administration model, and more.

As a best practice, companies should engage in cross-functional dialogue within their organizations, including teams from IT to Finance to Program Management, ensuring everyone is aligned on changes to support future cloud processes. Furthermore, once a business has migrated from a physical datacenter to the cloud, they should consider whether their datacenter team is trained to support the systems and infrastructure of the cloud provider.
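One way to sketch the shared datacenter inventory footprint from the Discovery milestones is a simple record per asset that every team can read and extend. The field names and sample assets below are assumptions for illustration; a real inventory would capture the full list of attributes enumerated above (networking, release cadence, patching, and so on).

```python
# Hypothetical minimal schema for a shared datacenter inventory.
# Field names are illustrative, not a prescribed 2nd Watch format.

from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str                 # e.g. "vm", "database", "file-share"
    os: str = ""
    cpu_cores: int = 0
    ram_gb: int = 0
    dependencies: list = field(default_factory=list)   # other asset names
    compliance_tags: list = field(default_factory=list)

# Two made-up assets standing in for a real discovery output:
inventory = [
    Asset("erp-db", "database", os="linux", cpu_cores=16, ram_gb=64,
          compliance_tags=["pci"]),
    Asset("hr-portal", "vm", os="windows", cpu_cores=4, ram_gb=16,
          dependencies=["erp-db"]),
]
print([a.name for a in inventory])
```

Keeping dependencies and compliance tags on each record is what makes the later Planning phase mechanical: wave ordering and licensing questions can be answered by querying the inventory instead of by tribal knowledge.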

Phase 2: Planning

When a company is entering the Planning phase, they are leveraging the assets and deliverables gathered in the Discovery phase to create migration waves to be sequentially deployed into non-production and production environments.

Typically, it is best to target non-production migration waves first. To determine the sequence of waves, consider the following:

  • Mapping the current server inventory to the cloud platform’s machine types: Each current workload will generally run on a virtual machine type with similar computing power, memory, and disk. Oftentimes though, the current workload is overprovisioned, so each workload should be evaluated to ensure that it is migrated onto the right VM for that given workload.
  • Timelines: Businesses should lay out their target dates for each migration project.
  • Workloads in each grouping: Figure out how migration waves are grouped, e.g., non-production vs. production applications.
  • The cadence of code releases: Factor in any upcoming code releases as this may impact the decision of whether to migrate sooner or later.
  • Time for infrastructure deployment and testing: Allocate adequate time for testing infrastructure before fully moving over to the cloud.
  • The number of application dependencies: Migration order should be influenced by the number of application dependencies. The applications with the fewest dependencies are generally good candidates for migration first. In contrast, wait to migrate an application that depends on multiple databases.
  • Migration complexity and risk: Migration order should also take complexity into consideration. Tackling simpler aspects of the migration first will generally yield a more successful migration.
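The inventory-mapping step above can be made concrete with a small sketch. The machine-type names, specs, and headroom factor below are hypothetical, not any cloud provider's actual catalog; the point is matching each server to the smallest machine type that covers its observed peak usage rather than its provisioned capacity:

```python
# Minimal right-sizing sketch: match each server to the smallest machine
# type that covers its observed peak usage (not its provisioned capacity).
# Machine types and specs below are illustrative, not a real catalog.

MACHINE_TYPES = [
    # (name, vCPUs, memory_gb) -- ordered smallest to largest
    ("small-2", 2, 8),
    ("medium-4", 4, 16),
    ("large-8", 8, 32),
    ("xlarge-16", 16, 64),
]

def right_size(peak_vcpus, peak_memory_gb, headroom=1.2):
    """Return the smallest machine type whose specs cover peak usage
    plus a safety headroom, or None if nothing fits."""
    need_cpu = peak_vcpus * headroom
    need_mem = peak_memory_gb * headroom
    for name, vcpus, mem in MACHINE_TYPES:
        if vcpus >= need_cpu and mem >= need_mem:
            return name
    return None

# An overprovisioned 16-vCPU server that only ever peaks at 3 vCPUs
# and 10 GB of memory maps to a much smaller (and cheaper) machine type.
print(right_size(3, 10))  # medium-4
```

Evaluating observed peaks instead of provisioned capacity is what catches the overprovisioning described above.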

As mentioned above, the best practice for migration waves is to start with more predictable and simple workloads. For instance, companies should migrate file shares first, then databases and domain controllers, and save the applications for last. However, sometimes the complexity and dependencies don’t allow for a straightforward migration. In these cases, engaging a service provider experienced with complex environments is prudent.
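The ordering heuristics above (fewest dependencies first, lowest complexity first) amount to a simple sort. The workload names, dependency counts, and complexity scores here are invented for illustration:

```python
# Order migration waves so that workloads with the fewest dependencies
# and the lowest complexity go first. Names and scores are illustrative.

def plan_waves(workloads):
    """workloads: list of (name, dependency_count, complexity_score).
    Returns names ordered for migration: fewest dependencies first,
    ties broken by lower complexity."""
    return [name for name, deps, complexity
            in sorted(workloads, key=lambda w: (w[1], w[2]))]

inventory = [
    ("erp-app", 5, 8),        # depends on multiple databases -- last
    ("file-share", 0, 1),     # simple, standalone -- first
    ("hr-database", 2, 4),
    ("domain-controller", 1, 3),
]
print(plan_waves(inventory))
# ['file-share', 'domain-controller', 'hr-database', 'erp-app']
```

Real planning also weighs release cadences and target dates, but a mechanical first pass like this gives teams a defensible starting order to refine.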

Phase 3: Execution

Once companies have developed a plan, they can bring it to fruition in the Execution phase. Here, businesses will need to be deliberate about the steps they take and the configurations they develop.

In the Execution phase, companies will put into place infrastructure components and ensure they are configured appropriately, like IAM, networking, firewall rules, and Service Accounts. Here is also where teams should test the applications on the infrastructure configurations to ensure that they have access to their databases, file shares, web servers, load balancers, Active Directory servers, and more. Execution also includes using logging and monitoring to ensure applications continue to function with the necessary performance.
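One concrete form the infrastructure validation above can take is a connectivity smoke test run against every migrated endpoint. The sketch below is a minimal version; the endpoint names and hostnames in the usage example are hypothetical:

```python
import socket

def check_endpoint(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within
    the timeout, False otherwise (unreachable, refused, or timed out)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def smoke_test(endpoints):
    """Run connectivity checks for every (name, host, port) tuple and
    return a {name: reachable} report."""
    return {name: check_endpoint(host, port) for name, host, port in endpoints}

# Hypothetical post-migration checklist: databases, web tier, AD, etc.
endpoints = [
    ("orders-db", "db.internal.example.com", 5432),
    ("web-tier", "web.internal.example.com", 443),
    ("active-directory", "ad.internal.example.com", 389),
]
results = smoke_test(endpoints)  # e.g. {"orders-db": True, ...}
```

A report like this is the cheapest possible first gate; application-level testing (queries, logins, load balancer health checks) still follows it.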

In order for the Execution phase to be successful, there needs to be agile application debugging and testing. Moreover, organizations should have both a short and long-term plan for resolving blockers that may come up during the migration. The Execution phase is iterative and the goal should be to ensure that applications are fully tested on the new infrastructure.

Phase 4: Optimization

The last phase of a datacenter migration project is Optimization. After a business has migrated its workloads to the cloud, it should conduct periodic reviews and planning to optimize the workloads. Optimization includes the following activities:

  • Resizing machine types and disks
  • Leveraging software like Terraform for more agile and predictable deployments
  • Improving automation to reduce operational overhead
  • Bolstering integration with logging, monitoring, and alerting tools
  • Adopting managed services to reduce operational overhead
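The first activity in the list, resizing machine types, is typically driven by monitoring data. Here is a minimal sketch of that review; the utilization thresholds and VM records are invented, and a real review would look at memory, disk, and sustained peaks as well:

```python
# Periodic optimization sketch: flag VMs whose sustained CPU utilization
# suggests a resize or shutdown. Thresholds and records are illustrative.

def review_utilization(vms, low=0.2, idle=0.05):
    """vms: list of (name, avg_cpu_fraction). Returns a dict of
    recommendations keyed by VM name."""
    actions = {}
    for name, avg_cpu in vms:
        if avg_cpu < idle:
            actions[name] = "consider shutdown"
        elif avg_cpu < low:
            actions[name] = "consider downsizing"
        else:
            actions[name] = "ok"
    return actions

fleet = [("batch-worker", 0.02), ("web-1", 0.15), ("db-primary", 0.65)]
print(review_utilization(fleet))
# {'batch-worker': 'consider shutdown', 'web-1': 'consider downsizing',
#  'db-primary': 'ok'}
```

Running a review like this on a schedule, rather than once, is what makes Optimization an ongoing phase rather than a one-time cleanup.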

Cloud services provide visibility into resource consumption and spending, and organizations can more easily identify the compute resources they are paying for. Additionally, businesses can identify virtual machines they need or don’t need. By migrating from a traditional datacenter environment to a cloud environment, teams will be able to optimize their workloads due to the powerful tools that cloud platforms provide.

How do I take the first step in datacenter migration?

While undertaking a full datacenter migration is a significant project, it is worthwhile. The migration framework we’ve provided can help any business break down the process into manageable stages and move fully to the cloud.

When you’re ready to take the first step, we’re here to help make the process even easier. Contact a 2nd Watch advisor today to get started with your migration to the cloud.

 


Cloud Migration Challenges: 6 Reasons the Cloud Might Not be What You Think it Is

A lot of enterprises migrate to the public cloud because they see everyone else doing it. And while you should stay up on the latest and greatest innovations – which often happen in the cloud – you need to be aware of the realities of the cloud and understand different cloud migration strategies. You need to know why you’re moving to the cloud. What’s your goal? And what outcomes are you seeking? Make sure you know what you’re getting your enterprise into before moving forward in your cloud journey.

1. Cloud technology is not a project, it’s a constant

Be aware that while there is a starting point to becoming more cloud native – the migration – there is no stopping point. The migration occurs, but the transformation, development, innovation, and optimization are never over.

There are endless applications and tools to consider, your organization will evolve over time, technology changes regularly, and user preferences change even faster. Fueled by your new operating system, cloud computing puts you into continuous motion. While continuous motion is positive for outcomes, you need to be ready to ride the wave regardless of where it goes. Once you get on, success requires that you stay there.

2. Flex-agility is necessary for survival

Flexibility + agility = flex-agility, and you need it in the cloud. Flex-agility enables enterprises to adapt to the risks and unknowns occurring in the world. The pandemic continues to highlight the need for flex-agility in business. Organizations further along in their cloud journeys were able to quickly establish remote workforces, adjust customer interactions, communicate completely and effectively, and ultimately, continue running. While the pandemic was unprecedented, more commonly, flex-agility is necessary in natural disasters like floods, hurricanes, and tornadoes; after a ransomware or phishing attack; or when an employee’s device is lost, stolen, or destroyed.

3. You still have to move faster than the competition

Gaining or maintaining your competitive edge in the cloud has a lot to do with speed. Whether it’s the dog-eat-dog nature of your industry, macroeconomics, or a political environment, these are the forces that speed up innovation. You might not have any control over them, but they’re shaping the way consumers interact with brands. Again, when you think about how digital transformation evolved during the pandemic, you saw winning businesses move the fastest. The cloud is an amazing opportunity to meet all the demands of your environment, but if you’re not looking forward, forecasting trends, and moving faster than the competition, you could fall behind.

4. People are riskier than technology

In many ways, the technology is the easiest part of an enterprise cloud strategy. It’s with people that much of the risk comes into play. You may have a great strategy with clean processes and tactics, but if the execution is poor, the business can’t succeed. A recent survey revealed that 85% of organizations report deficits in cloud expertise, with the top three areas being cloud platforms, cloud native engineering, and security. While business owners acknowledge the importance of these skills, they’re still struggling to attract the caliber of talent necessary.

In addition to partnering with cloud service experts to ensure a capable team, organizations are also reinventing their technical culture to work more like a startup. This can incentivize the cloud-capable with hybrid work environments, an emphasis on collaboration, use of the agile framework, and fostering innovation.

5. Cost-savings is not the best reason to migrate to the cloud

Buy-in from executives is key for any enterprise transitioning to the cloud. Budget and resources are necessary to continue moving forward, but the business value of a cloud transformation isn’t cost savings. Really, it’s about repurposing dollars to achieve other things. At the end of the day, companies are focused on getting customers, keeping customers, and growing customers, and that’s what the cloud helps to support.

By innovating products and services in a cloud environment, an organization is able to give customers new experiences, sell them new things, and delight them with helpful customer service and a solid user experience. The cloud isn’t a cost center, it’s a business enabler, and that’s what leadership needs to hear.

6. Cloud migration isn’t always the right answer

Many enterprises believe that the process of moving to the cloud will solve all of their problems. Unfortunately, the cloud is simply the most popular technology platform today. Sure, it can help you reach your goals with easy-to-use functionality, automated tools, and modern business solutions, but it takes effort to utilize and apply those resources for success.

For most organizations, moving to the cloud is the right answer, but it could be the wrong time. The organization might not know how it wants to utilize cloud functionality. Maybe outcomes haven’t been identified yet, the business strategy doesn’t have buy-in from leadership, or technicians aren’t aware of the potential opportunities. Another issue stalling cloud migration is a lack of internal cloud expertise. If your technicians aren’t cloud savvy enough to handle all the moving parts, bring on a collaborative cloud advisor to ensure success.

Ready for the next step in your cloud journey?

Cloud Advisory Services at 2nd Watch provide you with the cloud solution experts necessary to reduce complexity and provide impartial guidance throughout migration, implementation, and adoption. Whether you’re just curious about the cloud, or you’re already there, our advanced capabilities support everything from platform selection and cost modeling, to app classification, and migrating workloads from your on-premises data center. Contact us to learn more!

Lisa Culbert, Marketing


2nd Watch Uses Redshift to Improve Client Optimization

Improving our use of Redshift: Then and now

Historically, and common among enterprise IT processes, the 2nd Watch optimization team was pulling in cost usage reports from Amazon and storing them in S3 buckets. The data was then loaded into Redshift, Amazon’s cloud data warehouse, where it could be manipulated and analyzed for client optimization. Unfortunately, the Redshift cluster filled up quickly and regularly, forcing us to spend unnecessary time and resources on maintenance and cleanup. Additionally, Redshift requires a large cluster to work with, so the process for accessing and using data became slow and inefficient.

Of course, to solve for this we could have doubled the size, and therefore the cost, of our Redshift usage, but that went against our commitment to provide cost-effective options for our clients. We also could have considered moving to a different type of node that is storage-optimized, instead of compute-optimized.

Lakehouse Architecture for speed improvements and cost savings

The better solution we uncovered, however, was to follow the Lakehouse Architecture pattern to improve our use of Redshift to move faster and with more visibility, without additional storage fees. The Lakehouse Architecture is a way to strike a balance between cost and agility by selectively moving data in and out of Redshift depending on the processing speed needed for the data. Now, after a data dump to S3, we use AWS Glue crawlers and tables to create external tables in the AWS Glue Data Catalog. The external tables or schemas are linked to the Redshift cluster, allowing our optimization team to read from S3 to Redshift using Redshift Spectrum.
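The Spectrum wiring described above comes down to a one-time DDL statement run against the Redshift cluster. The sketch below only builds that statement; the schema name, Glue database name, and IAM role ARN are placeholders, not our actual configuration:

```python
# Build the Redshift Spectrum DDL that links a Glue Data Catalog
# database to a cluster as an external schema. All names and the ARN
# below are placeholders for illustration.

def external_schema_ddl(schema, glue_database, iam_role_arn):
    """Return the CREATE EXTERNAL SCHEMA statement that exposes a Glue
    Data Catalog database inside Redshift via Spectrum."""
    return (
        f"CREATE EXTERNAL SCHEMA IF NOT EXISTS {schema}\n"
        f"FROM DATA CATALOG\n"
        f"DATABASE '{glue_database}'\n"
        f"IAM_ROLE '{iam_role_arn}'\n"
        f"CREATE EXTERNAL DATABASE IF NOT EXISTS;"
    )

ddl = external_schema_ddl(
    "spectrum", "cost_usage_lake",
    "arn:aws:iam::123456789012:role/RedshiftSpectrumRole")
print(ddl)
```

Once the external schema exists, queries such as `SELECT ... FROM spectrum.some_table` read directly from S3, which is what lets the cluster stay small.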

Our cloud data warehouse remains tidy without dedicated clean-up resources, and we can query the data in S3 via Redshift without having to move anything. Even though we’re using the same warehouse, we’ve optimized its use for the benefit of both our clients and 2nd Watch best practices. In fact, our estimated savings are $15,000 per month, or 100% of our previous Redshift cost.

How we’re using Redshift today

With our new model and the benefits afforded to clients, 2nd Watch is applying Redshift for a variety of optimization opportunities.

Discover new opportunities for optimization. By storing and organizing data related to our clients’ AWS, Azure, and/or Google Cloud usage versus spend data, the 2nd Watch optimization team can see where further optimization is possible. Improved data access and visibility enables a deeper examination of cost history, resource usage, and any known RIs or savings plans.

Increase automation and reduce human error. The new model allows us to use DBT (data build tool) to complete SQL transforms on all data models used to feed reporting. These reports go into our dashboards and are then presented to clients for optimization. DBT empowers analysts to transform warehouse data more efficiently, and with less risk, by relying on automation instead of spreadsheets.

Improve efficiency from raw data to client reporting. Raw data that lives in a data lake in S3 is transformed and organized into a structured data lake that is prepared to be defined in AWS Glue Catalog tables. This gives the analysts access to query the data from Redshift and use DBT to format the data into useful tables. From there, the optimization team can make data-based recommendations and generate complete reports for clients.
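To make the transform step concrete, a typical reduction rolls raw usage rows up into a per-service spend summary. In our pipeline that logic lives in DBT SQL models; the Python sketch below, with invented field names and rows, shows the same shape of transformation:

```python
from collections import defaultdict

# Reporting-transform sketch: roll raw usage rows up into per-service
# spend totals. Field names and rows are invented for illustration; in
# practice this kind of aggregation runs as SQL inside DBT models.

def spend_by_service(raw_rows):
    """raw_rows: iterable of dicts with 'service' and 'cost' keys.
    Returns {service: total_cost} rounded to cents."""
    totals = defaultdict(float)
    for row in raw_rows:
        totals[row["service"]] += row["cost"]
    return {svc: round(cost, 2) for svc, cost in totals.items()}

raw = [
    {"service": "ec2", "cost": 10.50},
    {"service": "s3", "cost": 1.25},
    {"service": "ec2", "cost": 4.75},
]
print(spend_by_service(raw))  # {'ec2': 15.25, 's3': 1.25}
```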

In the future, we plan on feeding a business intelligence dashboard directly from Redshift, further increasing efficiency for both our optimization team and our clients.

Client benefits with Redshift optimization

  • Cost savings: Only pay for the S3 storage you use, without any storage fees from Redshift.
  • Unlimited data access: Large amounts of old data are available in the data lake, which can be joined across tables and brought into Redshift as needed.
  • Increased data visibility: Greater insight into data enables us to provide more optimization opportunities and supports decision making.
  • Improved flexibility and productivity: Analysts can get historical data within one hour, rather than waiting 1-2 weeks for requests to be fulfilled.
  • Reduced compute cost: The compute cost of loading data is shifted onto Amazon EKS.

-Spencer Dorway, Data Engineer


2nd Watch Enhances Managed Optimization service in partnership with Spot by NetApp

Today, we’re excited to announce a new enhancement to our Managed Optimization service – Spot Instance and Container Optimization – for enterprise IT departments looking to more thoughtfully allocate cloud resources and carefully manage cloud spend.

Enterprises using cloud infrastructure and services today are seeing higher cloud costs than anticipated due to factors such as cloud sprawl, shadow IT, improper allocation of cloud resources, and a failure to use the most efficient resource based on workload. To address these concerns, we take a holistic approach to Optimization and have partnered with Spot by NetApp to enhance our Managed Optimization service.

The service works by recommending workloads that can take advantage of the cost savings associated with running instances, VMs and containers on “spot” resources. A spot resource is an unused cloud resource that is available for sale in a marketplace for less than the on-demand price. Because spot resources enable users to request unused EC2 instances or VMs to run their workloads at steep discounts, users can significantly lower their cloud compute costs, up to 90% by some measures. To deliver its service, we’re partnering with Spot, whose cloud automation and optimization solutions help companies maximize return on their cloud investments.
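The economics above are easy to sketch: the saving is the gap between on-demand and spot pricing, scaled by the hours a workload can tolerate running on interruptible capacity. The prices in this example are made up for illustration:

```python
# Spot-savings sketch: compare on-demand vs. spot cost for the hours a
# workload can run on interruptible capacity. Prices are illustrative.

def spot_savings(on_demand_hourly, spot_hourly, hours):
    """Return (dollars_saved, fraction_saved) for running `hours`
    on spot capacity instead of on-demand."""
    saved = (on_demand_hourly - spot_hourly) * hours
    fraction = 1 - spot_hourly / on_demand_hourly
    return round(saved, 2), round(fraction, 2)

# A dev/staging fleet at $1.00/hr on-demand vs. $0.10/hr spot,
# running 720 hours a month:
print(spot_savings(1.00, 0.10, 720))  # (648.0, 0.9)
```

The 0.9 here matches the "up to 90%" figure cited above; the real discount varies by region, instance type, and marketplace demand.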

“Early on, spot resources were difficult to manage, but the tasks associated with managing them can now be automated, making the use of spot a smart approach to curbing cloud costs,” says Chris Garvey, EVP of Product at 2nd Watch. “Typically, non-mission critical workloads such as development and staging have been able to take advantage of the cost savings of spot instances.

“By combining 2nd Watch’s expert professional services, managed cloud experience and solutions from Spot by NetApp, 2nd Watch has been able to help companies use spot resources to run production environments.”

“Spot by NetApp is thrilled to be working with partners like 2nd Watch to help customers maximize the value of their cloud investment,” says Amiram Shachar, Vice President and General Manager of Spot by NetApp. “Working together, we’re helping organizations go beyond one-off optimization projects to instead ensure continuous optimization of their cloud environment using Spot’s unique technology. With this new offering, 2nd Watch demonstrates a keen understanding of this critical customer need and is leveraging the best technology in the market to address it.”


You’re on AWS. Now What? 5 Strategies to Increase Your Cloud’s Value

Now that you’ve migrated your applications to AWS, how can you take the value of being on the cloud to the next level? To provide guidance on next steps, here are 5 things you should consider to amplify the value of being on AWS.


Top 10 Cloud Optimization Best Practices

Cloud optimization is a continuous process specific to a company’s goals, but there are some staple best practices all optimization projects should follow. Here are our top 10.

1. Begin with the end in mind

Business leaders and stakeholders throughout the organization should know exactly what they’re trying to achieve with a cloud optimization project. Additionally, this goal should be revisited on a regular basis to make sure you remain on track to achievement. Create measures to gauge success at different points and follow the agreed upon order of operations to complete the process.

2. Create structure around governance and responsibility

Overprovisioning is one of the most common issues adding unnecessary costs to your bottom line. Implement specific and regulated structure around governance and responsibility for all teams involved in optimization to control any unnecessary provisioning. Check in regularly to make sure teams are following the structure and you only have the tools you need and are actively using.

3. Get all the data you need

Cloud optimization is a data-driven exercise. To be successful, you need insight into a range of data points. Not only do you need to identify what data you need and be able to get it, but you also need to know what data you’re missing and figure out how to get it. Collaborate with internal teams to surface data that may be siloed or already collected elsewhere. Additionally, regularly clean and validate data to ensure reliability for data-based decision making.

4. Implement tagging practices

To best utilize the data you have, organizing and maintaining it with strict tagging practices is necessary. Implement a system that works from more than just a technical standpoint. You can also use tagging to launch instances, control your auto parking methodology, or in scheduling. Tagging helps you understand the data and see what is driving spend. Whether it’s an environment tag, owner tag, or application tag, tagging provides clarity into spend, which is the key to optimization.
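A strict tagging practice is straightforward to enforce mechanically. Here is a minimal sketch; the required tag keys are examples, not a prescribed standard, and real enforcement would pull resource tags from the cloud provider's API:

```python
# Tag-policy sketch: flag resources missing required tags. The required
# keys are examples; use whatever set matches your own standard.

REQUIRED_TAGS = {"environment", "owner", "application"}

def missing_tags(resource_tags, required=frozenset(REQUIRED_TAGS)):
    """Return the set of required tag keys absent from a resource,
    treating empty values as missing."""
    present = {k for k, v in resource_tags.items() if v}
    return set(required) - present

# A VM tagged with an environment but an empty owner and no
# application tag fails the policy on two counts.
vm_tags = {"environment": "prod", "owner": "", "team": "data"}
print(sorted(missing_tags(vm_tags)))  # ['application', 'owner']
```

Running a check like this in CI or on a schedule turns tagging from a guideline into an enforced practice.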

5. Gain visibility into spend

Tagging is one way to see where your spend is going, but it’s not the only one you need. Manage accounts regularly to make sure inactive accounts aren’t continuing to be billed. Set up an internal mechanism to review with your app teams and hold them accountable. It can be as simple as a dashboard with tagging grading, as long as it lets the data speak for itself.
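The "tagging grading" mentioned above can be as simple as a coverage percentage per team. The sketch below uses invented resource data and example tag keys:

```python
# Tagging-grade sketch: score a team by the fraction of its resources
# carrying a complete set of required tags. Data and keys are invented.

def tag_grade(resources, required=frozenset({"environment", "owner", "application"})):
    """resources: list of tag dicts for one team. Returns the percent
    of resources whose tags include every required key."""
    if not resources:
        return 0.0
    tagged = sum(1 for tags in resources if required <= set(tags))
    return round(100 * tagged / len(resources), 1)

team_resources = [
    {"environment": "prod", "owner": "ops", "application": "billing"},
    {"environment": "dev"},                      # incomplete tags
    {"environment": "prod", "owner": "ops", "application": "web"},
]
print(tag_grade(team_resources))  # 66.7
```

A single number per team is easy to rank on a dashboard, which is exactly what makes the data speak for itself in accountability reviews.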

6. Hire the right technical expertise

Get more out of your optimization with the right technical expertise on your internal team. Savvy technicians should work alongside the business teams to drive the goals of optimization throughout the process. Without collaboration between these departments, you risk moving in differing directions with multiple end goals in mind. For example, one team might be acting with performance or a technical aspect in mind without realizing the implications for optimization. Partnering with optimization experts can also keep teams aligned and moving toward the same goal.

7. Select the right tools and stick with them

Tools are a part of the optimization process, but they can’t solve problems alone. Additionally, there are an abundance of tools to choose from, many of which have similar functionality and outcomes. Find the right tools for your goals, facilitate adoption, and give them the time and data necessary to produce results. Don’t get distracted by every new, shiny tool available and the “tool champions” fighting for one over another. Avoid the costs of overprovisioning by checking usage regularly and maintaining the governance structure established throughout your teams.

8. Make sure your tools are working

Never assume a tool or a process you’ve put in place is working. In fact, it’s better to assume it’s not working and consistently check its efficiency. This regular practice of confirming the tools you have are both useful and being used will help you avoid overprovisioning and unnecessary spending. For tools to be effective and serve their purpose, you need enough visibility to determine how the tool is contributing to your overall end goal.

9. Empower someone to drive the process

The number one call to action for anyone diving into optimization is to appoint a leader. Without someone specific, qualified, and active in managing the project with each stakeholder and team involved, you won’t accomplish your goals. Empower this leader internally to gain the respect and attention necessary for employees to understand the importance of continuous optimization and contribute on their part.

10. Partner with experts

Finding the right partner to help you optimize efficiently and effectively will make the process easier at every turn. Bringing in an external driver who has the know-how and experience to consult on strategy through implementation, management, and replication is a smart move with fast results.

2nd Watch takes a holistic approach to cloud optimization with a team of experienced data scientists and architects who help you maximize performance and returns on your cloud assets. Are you ready to start saving? Let us help you define your optimization strategy to meet your business needs and maximize your results. Contact Us to take the next step in your cloud journey.

-Willy Sennott, Optimization Practice Manager


Steps to Continuous Cloud Optimization

Cloud optimization is an ongoing task for any organization driven by data. If you don’t believe you need to optimize, or you think you’re already optimized, you may not have the data necessary to see where you’re overprovisioned and wasting spend. Revisit the optimization pillars frequently to best evolve with and take advantage of everything the cloud has to offer.

Begin with the end in mind

The big question is, where are you trying to go? This question should constantly be revisited with internal stakeholders and business leaders. Define the process that will get you there and follow the order of operations identified to reach your optimization goal. Losing sight of the purpose, getting caught up in shiny new tools, or failing to incorporate the right teams could lead you off path.

Empower someone to drive the process

This is pivotal because without this appointed person, cloud optimization will not happen. Give someone the power to drive optimization policies throughout the organization. Companies most successful in achieving optimization have a good internal mandate to make it a priority. When messages come from the top, and are enforced through a project champion, people tend to pay attention and management is much more effective.

Fill the data gaps

Cloud optimization is a data-driven exercise, so you need all the data you can get to make it valuable. Your tools will be much more compelling when they have the data necessary to make smart recommendations. Understand where to get the data in your organization, and figure out how to get any data you don’t have. Verify your data regularly to confirm accuracy for intelligent decision making geared toward optimization.

Implement tagging practices

The practice of not only implementing, but also actively enforcing your tagging policies, drives optimization. Be it an environment tag, owner tag, or application tag, tags help you understand your data and what or who is driving spend.

Enforce accountability

While lack of tagging and data gaps prevent visibility, overprovisioning is also an accountability issue. Just look at the hundred-plus AWS services that show up on a long-time user’s bill. It’s not uncommon for 20-30% of the total to be attributed to services the organization never even knew existed when it migrated to the cloud.

Hold your app teams accountable with an internal mechanism that lets the data speak for itself. It can be as simple as a dashboard with tagging grading, because everybody understands those results.

Rearchitect and refactor

Migrating to the cloud via a lift and shift can be a valuable strategy for certain organizations. However, after a few months in the cloud, you need to intentionally move forward with the next steps. Reevaluating, refactoring, and rearchitecting will occur multiple times along the way. Without them, you end up spending more money than necessary.

Continuous optimization is a must

Optimization is not a one-and-done project because the possibilities are constantly evolving. Almost every day, a new technology is introduced. Maybe it’s a new instance family or tool. A couple years ago it was containers, and before that it was serverless. Being aware of these new and improved technologies is key to maintaining continuous optimization.

Engage with an experienced partner

There are a lot of factors to consider, evaluate, and complete as part of your cloud optimization practice. To maximize your optimization efforts, you want someone experienced to guide your strategy.

One benefit to partnering with an optimization expert, like 2nd Watch, is that an external partner can diffuse the internal conflicts typically associated with optimization. So much of the process is navigating internal politics and red tape. A partner helps meld the multiple layers of your business with a holistic approach that ensures your cloud is running as efficiently as possible.

-Willy Sennott, Optimization Practice Manager