5 Benefits Gained from Cloud Optimization

When undertaking a cloud migration, a common term that gets tossed around is “cloud optimization”. If your organization is new to the cloud, optimizing your environment is essential to ensuring your migration pays off quickly and continues to do so in the long term.

If your organization is already established in the cloud, you may observe higher costs than expected due to cloud sprawl, under-utilized resources, and improper allocation of resources. Cloud optimization helps your organization reduce these costs and improve overall efficiency in the cloud.

What is cloud optimization?

The definition of cloud optimization may vary from one cloud service provider to another, but generally, cloud optimization is the process of analyzing, configuring, provisioning, and right-sizing cloud resources to maximize performance and minimize waste for cost efficiency. The reality is that many organizations’ cloud environments are configured in an inefficient manner that creates unnecessary cloud spend. With proper cloud optimization tools and practices, these unnecessary costs can be eliminated.

While cloud optimization is mostly discussed in terms of cloud spend, cost optimization is simply one facet of cloud optimization, which can extend to overall performance and organizational efficiency. Some examples of cloud optimization practices that your organization can adopt right now include:

  • Right-sizing: Matching your cloud computing instance types (e.g., containers and VMs) and sizes to your workload performance and capacity needs to ensure the lowest cost possible.
  • Family Refresh: Replace outdated systems with updated ones to maximize performance.
  • Autoscaling: Scale your resources according to your application demand so you are only paying for what you use.
  • Applying Discounts: Reserved instances (RIs) allow companies to commit to cloud resources for an extended period. The longer the commitment and the more a company is prepared to pre-pay at the beginning of a period, the greater the discount will be. Discounted pricing models like RIs and spot instances will drive down your cloud costs when matched to the right workloads.
  • Identify Use of RIs: Identifying where RIs can be used is an effective way to save money in the cloud, provided they are applied to suitable workloads.
  • Eliminate Waste: Eliminating unused resources is a core component of cloud optimization. If you haven’t already adopted cloud optimization practices, you are most likely using more resources than necessary or not using certain resources to their full capacity.
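
To make right-sizing and autoscaling concrete, here is a minimal sketch of the kind of decision rule an optimization tool applies. The size ladder and CPU thresholds below are illustrative assumptions, not recommendations for any particular provider:

```python
# Illustrative right-sizing rule: step a workload down or up the size
# ladder based on observed average CPU utilization.
# The sizes and thresholds here are assumptions for the example.

SIZES = ["small", "medium", "large", "xlarge"]  # hypothetical size ladder

def rightsize(current_size: str, avg_cpu_percent: float) -> str:
    """Return the recommended size for a workload given its average CPU use."""
    i = SIZES.index(current_size)
    if avg_cpu_percent < 20 and i > 0:
        return SIZES[i - 1]          # under-utilized: step down a size
    if avg_cpu_percent > 80 and i < len(SIZES) - 1:
        return SIZES[i + 1]          # saturated: step up a size
    return current_size              # already well matched

print(rightsize("large", 12))   # under-utilized: recommends "medium"
print(rightsize("medium", 55))  # well matched: stays "medium"
```

Real tooling would weigh memory, disk, and network alongside CPU, and act on weeks of utilization history rather than a single average.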

Why is cloud optimization important?

Overspending in the cloud is a common issue, with many organizations allocating more resources to a workload than necessary. Integrating cloud optimization practices can reap many benefits for your cloud infrastructure and your organization, including the following:

  • Cloud Efficiency: When workload performance, compliance, and cost are continually balanced against the best-fit infrastructure in real-time, efficiency is achieved. Implementing cloud optimization practices will eliminate as much cloud resource waste as possible, increasing the performance of your cloud environment.
  • Cost Savings: Although cloud optimization comes in a variety of forms, cost optimization is the most important component for many organizations. By reducing waste in the cloud, costs are reduced as a byproduct.
  • Greater Visibility: Cloud optimization practices utilize analytics to provide visibility into your cloud environment to make data-driven decisions. Implementing optimization tools also provides cost visibility, so your organization has a better perspective on cloud spend.
  • Increased Productivity: Once a cloud optimization strategy is implemented, IT teams will spend less time trying to solve problems because an optimized environment prevents problems before they occur.
  • Organizational Innovation & Efficiency: Implementing cloud optimization often is accompanied by a cultural shift within organizations such as improved decision-making and collaboration across teams.

What are cloud optimization services?

Public cloud service providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud offer over 500,000 distinct prices and technical combinations that can overwhelm even the most experienced IT organizations and business units. Luckily, there are already services that can help your organization achieve the cloud optimization it needs to drive business outcomes. Cloud optimization services help your organization identify areas of improvement in your cloud for cost savings and efficiency, create an optimization strategy, and manage your cloud infrastructure for continuous optimization.

At 2nd Watch, we take a holistic approach to cloud optimization. We have developed various optimization pillars based on real-time data to ensure your cloud environments are running as efficiently as possible. Behind our solutions for cloud optimization is a team of experienced data scientists and architects that help you maximize the performance and returns of your cloud assets. Our services offerings for cloud optimization at 2nd Watch include:

  • Strategy & Planning: Define your optimization strategy with our proven methodology, tailored to meet your desired business outcomes and maximize your results.
  • Cost Optimization Assessment: Gain the visibility necessary to make data-driven decisions. Identify opportunities across our Pillars of Optimization to maximize cost savings and cloud environment efficiency.
  • Spot Instance & Container Optimization: Save up to 90% compared to traditional cloud infrastructure by running both Instances/VMs and Containers on spot resources for relevant workloads.
  • Multi-Cloud Optimization: Cloud optimization on a single public cloud is one challenge, but optimizing a hybrid cloud is another entirely. Apply learnings from your assessment to optimize your cloud environment for AWS, Microsoft Azure, Google Cloud, and VMware on AWS.
  • Forecasting, Modeling, & Analytics: Understand your past usage, and model and forecast your future needs with the analytical data needed for visibility across your organization.

Our cloud optimization process starts with data, and you have a lot of it. But data alone can lead you astray, yielding wasted resources and overspend. There are many other factors to evaluate, such as EDP/EA agreements and Savings Plans/RI purchases, to ensure you choose the most cost-effective option for your business. Strategically, our data scientists and architects map connections between data and workloads. We then make correlations between how workloads interact with each resource and the optimal financial mechanism to reach your cloud optimization goals.
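
As a rough illustration of the financial evaluation described above, the arithmetic behind an on-demand vs. reserved decision can be sketched in a few lines. The hourly rates below are made up for the example and are not real AWS prices:

```python
# Illustrative on-demand vs. 1-year reserved comparison.
# Rates and utilization are assumptions; committed capacity is billed
# for every hour of the term, on-demand only for hours actually used.

HOURS_PER_YEAR = 8760

def annual_cost(hourly_rate: float, utilization: float, committed: bool) -> float:
    hours = HOURS_PER_YEAR if committed else HOURS_PER_YEAR * utilization
    return hourly_rate * hours

on_demand = annual_cost(0.10, utilization=0.7, committed=False)  # ~$613.20
reserved  = annual_cost(0.06, utilization=0.7, committed=True)   # ~$525.60

# The reserved rate only pays off above its break-even utilization:
breakeven = 0.06 / 0.10   # 60% -- below this, on-demand is cheaper
print(reserved < on_demand, breakeven)
```

At 70% utilization the commitment wins; drop the workload below 60% and the math flips, which is exactly why the decision has to be made per workload rather than fleet-wide.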

Cloud Optimization with 2nd Watch

Working with a managed cloud service provider like 2nd Watch will give your organization the expertise needed for cloud optimization. If you want to learn more about cost savings or are interested in fully optimizing your cloud infrastructure, contact us to take your next steps.


Data Center Migration to the Cloud: Why Your Business Should Do it and How to Plan for it

Data center migration is ideal for businesses that are looking to exit or reduce on-premises data centers, migrate workloads as is, modernize apps, or leave another cloud. Executing migrations, however, is no small task, and as a result, many enterprise workloads still run in on-premises data centers. Often technology leaders want to migrate more of their workloads and infrastructure to private or public cloud, but they are put off by the seemingly complex processes and strategies involved in cloud migration, or lack the internal cloud skills necessary to make the transition.


Though data center migration can be a daunting business initiative, the benefits of moving to the cloud are well worth the effort, and the challenges of the migration process can be mitigated by creating a strategy, using the correct tools, and utilizing professional services. Data center migration provides a great opportunity to revise, rethink, and improve an organization’s IT architecture. It also ultimately impacts business-critical drivers such as reducing capital expenditure, decreasing ongoing costs, improving scalability and elasticity, improving time-to-market, enacting digital transformation, and attaining improvements in security and compliance.

What are Common Data Center Migration Challenges?

To ensure a seamless and successful migration to the cloud, businesses should be aware of the potential complexities and risks associated with data center migration. These complexities and risks are addressable, and if addressed properly, organizations can create not only an optimal environment for their migration project but also a launch point for business transformation.

Not Understanding Workloads

While cloud platforms are touted as flexible, they are service-oriented resources and should be treated as such. To be successful in cloud deployment, organizations need to understand their workloads’ compatibility, performance requirements (including hardware, software, and IOPS), required software, and adaptability to change. Teams need to run their cloud workloads on the cloud service that is best aligned with the needs of the application and the business.

Not Understanding Licensing

Cloud marketplaces allow businesses to easily “rent” software at an hourly rate. Though the ease of this purchase is enticing, it’s important to remember that it’s not the only option out there. Not all large vendors offer licensing mobility for all applications outside the operating system. Instead, companies should leverage existing relationships with licensing brokers. Just because a business is migrating to the cloud doesn’t mean it should abandon existing licensing channels. Organizations should familiarize themselves with their choices for licensing to better maximize ROI.

Not Looking for Opportunities to Incorporate PaaS

Platform as a service (PaaS) is a cloud computing model where a cloud service provider delivers hardware and software tools to users over the internet versus a build-it-yourself Infrastructure as a Service (IaaS) model. The PaaS provider abstracts everything—servers, networks, storage, operating system software, databases, development tools—enabling teams to focus on their application. This enables PaaS customers to build, test, deploy, run, update and scale applications more quickly and inexpensively than they could if they had to build out and manage an IaaS environment on top of their application. While businesses shouldn’t feel compelled to rewrite all their network configurations and operating environments, they should see where they can have quick PaaS wins to replace aging systems.

Not Proactively Preparing for Cloud Migration

Building a new data center is a major IT event and usually goes hand-in-hand with another significant business event, such as an acquisition, or outgrowing the existing data center. In the case of moving to a new on-premises data center, business will slow down as the company takes on a physical move. Migrating to the cloud is usually not coupled with an eventful business change, and as a result, business does not stop when a company chooses to migrate to the cloud. Therefore, a critical part of cloud migration success is designing the whole process as something that can run along with other IT changes that occur on the same timeline. Application teams frequently adopt cloud deployment practices months before their systems actually migrate to the cloud. By doing so, the team is ready before their infrastructure is even prepared, which makes cloud migration a much smoother event. Combining cloud events with other changes in this manner will maximize a company’s ability to succeed.

Treating and Running the Cloud Environment Like Traditional Data Centers

It seems obvious that cloud environments should be treated differently from traditional data centers, but this is a common pitfall for organizations. For example, preparing to migrate to the cloud should not include traditional data center services, like air conditioning, power supply, physical security, and other data center infrastructure, as a part of the planning. Again, this may seem very obvious, but if a business is used to certain practices, it can be surprisingly difficult to break entrenched mindsets and processes.

How to Plan for a Data Center Migration

While there are potential challenges associated with data center migration, the benefits of moving from physical infrastructures, enterprise data centers, and/or on-premises data storage systems to a cloud data center or a hybrid cloud system are well worth the effort.

Now that we’ve gone over the potential challenges of data center migration, how do businesses enable a successful data center migration while effectively managing risk?

Below, we’ve laid out a repeatable, high-level migration strategy broken down into four phases: Discovery, Planning, Execution, and Optimization. By leveraging a repeatable framework like this, organizations create the opportunity to identify assets, minimize migration costs and risks using a multi-phased migration approach, enable deployment and configuration, and finally, optimize the end state.

Phase 1: Discovery

During the Discovery phase, companies should understand and document the entire data center footprint. This means understanding the existing hardware mapping, software applications, storage layers (databases, file shares), operating systems, networking configurations, security requirements, models of operation (release cadence, how to deploy, escalation management, system maintenance, patching, virtualization, etc.), licensing and compliance requirements, as well as other relevant assets.

The objective of this phase is to have a detailed view of all relevant assets and resources of the current data center footprint.

The key milestones in the Discovery phase are:

  • Creating a shared data center inventory footprint: Every team and individual who is a part of the data center migration to the cloud should be aware of the assets and resources that will go live.
  • Sketching out an initial cloud platform foundations design: This involves identifying centralized concepts of the cloud platform organization such as folder structure, Identity and Access Management (IAM) model, network administration model, and more.
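
A shared inventory footprint doesn’t need to be elaborate; even a simple structured record per asset keeps every team working from the same facts. A minimal sketch (the field and asset names are hypothetical):

```python
# Minimal shared data center inventory record. Fields and example assets
# are illustrative; a real inventory would track far more attributes
# (storage, networking, licensing, compliance requirements, etc.).
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str                # e.g. "vm", "database", "file-share"
    os: str
    cpu_cores: int
    memory_gb: int
    dependencies: list = field(default_factory=list)  # names of other assets

inventory = [
    Asset("crm-db", "database", "linux", 8, 64),
    Asset("crm-app", "vm", "linux", 4, 16, dependencies=["crm-db"]),
]

# Everyone involved in the migration works from the same list:
print(sorted(a.name for a in inventory))
```

Captured this way, the same records later feed the Planning phase directly, since dependency counts and sizing data are already machine-readable.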

As a best practice, companies should engage in cross-functional dialogue within their organizations, including teams from IT to Finance to Program Management, ensuring everyone is aligned on changes to support future cloud processes. Furthermore, once a business has migrated from a physical data center to the cloud, they should consider whether their data center team is trained to support the systems and infrastructure of the cloud provider.

Phase 2: Planning

When a company is entering the Planning phase, they are leveraging the assets and deliverables gathered in the Discovery phase to create migration waves to be sequentially deployed into non-production and production environments.

Typically, it is best to target non-production migration waves first, which helps establish the sequencing for the waves that follow. To start, consider the following:

  • Mapping the current server inventory to the cloud platform’s machine types: Each current workload will generally run on a virtual machine type with similar computing power, memory and disk. Oftentimes though, the current workload is overprovisioned, so each workload should be evaluated to ensure that it is migrated onto the right VM for that given workload.
  • Timelines: Businesses should lay out their target dates for each migration project.
  • Workloads in each grouping: Determine how migration waves are grouped (e.g., non-production vs. production applications).
  • Cadence of code releases: Factor in any upcoming code releases as this may impact the decision of whether to migrate sooner or later.
  • Time for infrastructure deployment and testing: Allocate adequate time for testing infrastructures before fully moving over to the cloud.
  • Number of application dependencies: Migration order should be influenced by the number of application dependencies. The applications with the fewest dependencies are generally good candidates for migration first. In contrast, wait to migrate an application that depends on multiple databases.
  • Migration complexity and risk: Migration order should also take complexity into consideration. Tackling simpler aspects of the migration first will generally yield a more successful migration.

As mentioned above, the best practice for migration waves is to start with more predictable and simple workloads. For instance, companies should migrate file shares first, then databases and domain controllers, and save the applications for last. However, sometimes the complexity and dependencies don’t allow for a straightforward migration. In these cases, it is prudent to engage a service provider experienced with complex environments.
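
The sequencing guidance above (fewest dependencies and lowest complexity first) amounts to a simple sort over the inventory. A sketch with hypothetical workloads and scores:

```python
# Order migration candidates: fewest dependencies first, then lowest
# complexity. Workload names and scores are illustrative assumptions.

workloads = [
    {"name": "erp-app",      "dependencies": 4, "complexity": 3},
    {"name": "file-share",   "dependencies": 0, "complexity": 1},
    {"name": "reporting-db", "dependencies": 1, "complexity": 2},
]

waves = sorted(workloads, key=lambda w: (w["dependencies"], w["complexity"]))
print([w["name"] for w in waves])
# -> ['file-share', 'reporting-db', 'erp-app']
```

In practice, factors like release cadence and timelines would also weight the ordering, but the principle is the same: bank easy wins first and leave the tangled, many-dependency applications for the final waves.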

Phase 3: Execution

Once companies have developed a plan, they can bring it to fruition in the Execution phase. Here, businesses will need to be deliberate about the steps they take and the configurations they develop.

In the Execution phase, companies will put into place infrastructure components and ensure they are configured appropriately, like IAM, networking, firewall rules, and Service Accounts. Here is also where teams should test the applications on the infrastructure configurations to ensure that they have access to their databases, file shares, web servers, load balancers, Active Directory servers and more. Execution also includes using logging and monitoring to ensure applications continue to function with the necessary performance.

In order for the Execution phase to be successful, there needs to be agile application debugging and testing. Moreover, organizations should have both a short and long term plan for resolving blockers that may come up during the migration. The Execution phase is iterative and the goal should be to ensure that applications are fully tested on the new infrastructure.

Phase 4: Optimization

The last phase of a data center migration project is Optimization. After a business has migrated their workloads to the cloud, they should conduct periodic review and planning to optimize the workloads. Optimization includes the following activities:

  • Resizing machine types and disks
  • Leveraging a software like Terraform for more agile and predictable deployments
  • Improving automation to reduce operational overhead
  • Bolstering integration with logging, monitoring, and alerting tools
  • Adopting managed services to reduce operational overhead

Cloud services provide visibility into resource consumption and spend, and organizations can more easily identify the compute resources they are paying for. Additionally, businesses can identify virtual machines they need or don’t need. By migrating from a traditional data center environment to a cloud environment, teams will be able to more easily optimize their workloads due to the powerful tools that cloud platforms provide.

How do I take the first step in data center migration?

While undertaking a full data center migration is a significant project, it is worthwhile. The migration framework we’ve provided can help any business break down the process into manageable stages and move fully to the cloud.

When you’re ready to take the first step, we’re here to help to make the process even easier. Contact a 2nd Watch advisor today to get started with your data center migration to the cloud.


Cloud Migration Challenges: 6 Reasons the Cloud Might Not be What You Think it Is

A lot of enterprises migrate to the public cloud because they see everyone else doing it. And while you should stay up on the latest and greatest innovations – which often happen in the cloud – you need to be aware of the realities of the cloud and understand different cloud migration strategies. You need to know why you’re moving to the cloud. What’s your goal? And what outcomes are you seeking? Make sure you know what you’re getting your enterprise into before moving forward in your cloud journey.

1. Cloud technology is not a project, it’s a constant

Be aware that while there is a starting point to becoming more cloud native – the migration – there is no stopping point. The migration occurs, but the transformation, development, innovation, and optimization are never over.

There are endless applications and tools to consider, your organization will evolve over time, technology changes regularly, and user preferences change even faster. Fueled by your new operating system, cloud computing puts you into continuous motion. While continuous motion is positive for outcomes, you need to be ready to ride the wave regardless of where it goes. Once you get on, success requires that you stay there.

2. Flex-agility is necessary to survival

Flexibility + agility = flex-agility, and you need it in the cloud. Flex-agility enables enterprises to adapt to the risks and unknowns occurring in the world. The pandemic continues to highlight the need for flex-agility in business. Organizations further along in their cloud journeys were able to quickly establish remote workforces, adjust customer interactions, communicate completely and effectively, and ultimately, continue running. While the pandemic was unprecedented, more commonly, flex-agility is necessary in natural disasters like floods, hurricanes, and tornadoes; after a ransomware or phishing attack; or when an employee’s device is lost, stolen, or destroyed.

3. You still have to move faster than the competition

Gaining or maintaining your competitive edge in the cloud has a lot to do with speed. Whether it’s the dog-eat-dog nature of your industry, macroeconomics, or a political environment, these are the things that speed up innovation. You might not have any control over them, but they’re shaping the way consumers interact with brands. Again, when you think about how digital transformation evolved during the pandemic, you saw winning businesses move the fastest. The cloud is an amazing opportunity to meet all the demands of your environment, but if you’re not looking forward, forecasting trends, and moving faster than the competition, you could fall behind.

4. People are riskier than technology

In many ways, the technology is the easiest part of an enterprise cloud strategy; it’s with people that much of the risk comes into play. You may have a great strategy with clean processes and tactics, but if the execution is poor, the business can’t succeed. A recent survey revealed that 85% of organizations report deficits in cloud expertise, with the top three gaps being cloud platforms, cloud native engineering, and security. While business owners acknowledge the importance of these skills, they’re still struggling to attract the caliber of talent necessary.

In addition to partnering with cloud service experts to ensure a capable team, organizations are also reinventing their technical culture to work more like a startup. This can incentivize the cloud-capable with hybrid work environments, an emphasis on collaboration, use of the agile framework, and fostering innovation.

5. Cost-savings is not the best reason to migrate to the cloud

Buy-in from executives is key for any enterprise transitioning to the cloud. Budget and resources are necessary to continue moving forward, but the business value of a cloud transformation isn’t cost savings. Really, it’s about repurposing dollars to achieve other things. At the end of the day, companies are focused on getting customers, keeping customers, and growing customers, and that’s what the cloud helps to support.

By innovating products and services in a cloud environment, an organization is able to give customers new experiences, sell them new things, and delight them with helpful customer service and a solid user experience. The cloud isn’t a cost center, it’s a business enabler, and that’s what leadership needs to hear.

6. Cloud migration isn’t always the right answer

Many enterprises believe that moving to the cloud will solve all of their problems. Unfortunately, the cloud is simply today’s most popular technology platform. Sure, it can help you reach your goals with easy-to-use functionality, automated tools, and modern business solutions, but it takes effort to utilize and apply those resources for success.

For most organizations, moving to the cloud is the right answer, but it could be the wrong time. The organization might not know how it wants to utilize cloud functionality. Maybe outcomes haven’t been identified yet, the business strategy doesn’t have buy-in from leadership, or technicians aren’t aware of the potential opportunities. Another issue stalling cloud migration is internal cloud-based expertise. If your technicians aren’t cloud savvy enough to handle all the moving parts, bring on a collaborative cloud advisor to ensure success.

Ready for the next step in your cloud journey?

Cloud Advisory Services at 2nd Watch provide you with the cloud solution experts necessary to reduce complexity and provide impartial guidance throughout migration, implementation, and adoption. Whether you’re just curious about the cloud, or you’re already there, our advanced capabilities support everything from platform selection and cost modeling, to app classification, and migrating workloads from your on-premises data center. Contact us to learn more!

Lisa Culbert, Marketing

2nd Watch Uses Redshift to Improve Client Optimization

Improving our use of Redshift: Then and now

Historically, and common among enterprise IT processes, the 2nd Watch optimization team was pulling in cost usage reports from Amazon and storing them in S3 buckets. The data was then loaded into Redshift, Amazon’s cloud data warehouse, where it could be manipulated and analyzed for client optimization. Unfortunately, the Redshift cluster filled up quickly and regularly, forcing us to spend unnecessary time and resources on maintenance and clean up. Additionally, Redshift requires a large cluster to work with, so the process for accessing and using data became slow and inefficient.

Of course, to solve this we could have doubled the size, and therefore the cost, of our Redshift usage, but that went against our commitment to provide cost-effective options for our clients. We also could have considered moving to a different type of node that is storage-optimized instead of compute-optimized.

Lakehouse Architecture for speed improvements and cost savings

The better solution we uncovered, however, was to follow the Lakehouse Architecture pattern to improve our use of Redshift to move faster and with more visibility, without additional storage fees. The Lakehouse Architecture is a way to strike a balance between cost and agility by selectively moving data in and out of Redshift depending on the processing speed needed for the data. Now, after a data dump to S3, we use AWS Glue crawlers and tables to create external tables in the Glue Data Catalog. The external tables or schemas are linked to the Redshift cluster, allowing our optimization team to read from S3 to Redshift using Redshift Spectrum.
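
For readers unfamiliar with Redshift Spectrum, the linking step described above boils down to creating an external schema that points at a Glue Data Catalog database. A sketch of the DDL involved, generated here as a string (the schema, database, and IAM role names are placeholders, not our actual configuration):

```python
# Build the Redshift DDL that links an external schema to a Glue Data
# Catalog database, letting Redshift Spectrum query S3 data in place.
# All identifiers below are placeholder examples.

def external_schema_ddl(schema: str, glue_db: str, iam_role_arn: str) -> str:
    return (
        f"CREATE EXTERNAL SCHEMA IF NOT EXISTS {schema} "
        f"FROM DATA CATALOG DATABASE '{glue_db}' "
        f"IAM_ROLE '{iam_role_arn}' "
        f"CREATE EXTERNAL DATABASE IF NOT EXISTS;"
    )

ddl = external_schema_ddl(
    "spectrum",                                     # external schema name
    "cost_usage",                                   # Glue database name
    "arn:aws:iam::123456789012:role/spectrum-role", # role Redshift assumes
)
print(ddl)
```

Once the external schema exists, tables cataloged by the Glue crawlers appear in Redshift and can be queried or joined against local tables without loading the underlying S3 data into the cluster.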

Our cloud data warehouse remains tidy without dedicated clean-up resources, and we can query the data in S3 via Redshift without having to move anything. Even though we’re using the same warehouse, we’ve optimized its use for the benefit of both our clients and 2nd Watch best practices. In fact, our estimated savings are $15,000 per month, or 100% of our previous Redshift cost.

How we’re using Redshift today

With our new model and the benefits afforded to clients, 2nd Watch is applying Redshift for a variety of optimization opportunities.

Discover new opportunities for optimization. By storing and organizing data related to our clients’ AWS, Azure, and/or Google Cloud usage versus spend data, the 2nd Watch optimization team can see where further optimization is possible. Improved data access and visibility enables a deeper examination of cost history, resource usage, and any known RIs or savings plans.

Increase automation and reduce human error. The new model allows us to use DBT (data build tool) to complete SQL transforms on all data models used to feed reporting. These reports go into our dashboards and are then presented to clients for optimization. DBT empowers analysts to transform warehouse data more efficiently, and with less risk, by relying on automation instead of spreadsheets.

Improve efficiency from raw data to client reporting. Raw data that lives in a data lake in S3 is transformed and organized into a structured data lake that is prepared to be defined in AWS Glue Data Catalog tables. This gives analysts access to query the data from Redshift and use DBT to format the data into useful tables. From there, the optimization team can make data-based recommendations and generate complete reports for clients.

In the future, we plan on feeding a business intelligence dashboard directly from Redshift, further increasing efficiency for both our optimization team and our clients.

Client benefits with Redshift optimization

  • Cost savings: Only pay for the S3 storage you use, without any storage fees from Redshift.
  • Unlimited data access: Large amounts of old data are available in the data lake, which can be joined across tables and brought into Redshift as needed.
  • Increased data visibility: Greater insight into data enables us to provide more optimization opportunities and supports decision making.
  • Improved flexibility and productivity: Analysts can get historical data within one hour, rather than waiting 1-2 weeks for requests to be fulfilled.
  • Reduced compute cost: Shift the compute cost of loading data to Amazon EKS.

-Spencer Dorway, Data Engineer

2nd Watch Enhances Managed Optimization service in partnership with Spot by NetApp

Today, we’re excited to announce a new enhancement to our Managed Optimization service – Spot Instance and Container Optimization – for enterprise IT departments looking to more thoughtfully allocate cloud resources and carefully manage cloud spend.

Enterprises using cloud infrastructure and services today are seeing higher cloud costs than anticipated due to factors such as cloud sprawl, shadow IT, improper allocation of cloud resources, and a failure to use the most efficient resource based on workload. To address these concerns, we take a holistic approach to optimization and have partnered with Spot by NetApp to enhance our Managed Optimization service.

The service works by recommending workloads that can take advantage of the cost savings associated with running instances, VMs and containers on “spot” resources. A spot resource is an unused cloud resource that is available for sale in a marketplace for less than the on-demand price. Because spot resources enable users to request unused EC2 instances or VMs to run their workloads at steep discounts, users can significantly lower their cloud compute costs, up to 90% by some measures. To deliver its service, we’re partnering with Spot, whose cloud automation and optimization solutions help companies maximize return on their cloud investments.
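
The headline savings figure is easy to sanity-check with back-of-the-envelope arithmetic. The rates below are illustrative assumptions, not actual spot market prices:

```python
# Illustrative spot-savings arithmetic; the hourly rate and discount
# are assumptions, not real marketplace quotes.

def monthly_cost(hourly_rate: float, hours: float = 730) -> float:
    """Cost of running one instance for a month (~730 hours)."""
    return hourly_rate * hours

on_demand = monthly_cost(0.20)          # $146.00 at the on-demand rate
spot      = monthly_cost(0.20 * 0.10)   # 90% discount -> $14.60

savings_pct = (on_demand - spot) / on_demand * 100
print(f"{savings_pct:.0f}% saved")
```

The real complication, which automation addresses, is that spot capacity can be reclaimed by the provider, so workloads need interruption handling before the discount is usable in practice.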

“Early on, spot resources were difficult to manage, but the tasks associated with managing them can now be automated, making the use of spot a smart approach to curbing cloud costs,” says Chris Garvey, EVP of Product at 2nd Watch. “Typically, non-mission critical workloads such as development and staging have been able to take advantage of the cost savings of spot instances.

“By combining 2nd Watch’s expert professional services, managed cloud experience and solutions from Spot by NetApp, 2nd Watch has been able to help companies use spot resources to run production environments.”

“Spot by NetApp is thrilled to be working with partners like 2nd Watch to help customers maximize the value of their cloud investment,” says Amiram Shachar, Vice President and General Manager of Spot by NetApp. “Working together, we’re helping organizations go beyond one-off optimization projects to instead ensure continuous optimization of their cloud environment using Spot’s unique technology. With this new offering, 2nd Watch demonstrates a keen understanding of this critical customer need and is leveraging the best technology in the market to address it.”

You’re on AWS. Now What? 5 Strategies to Increase Your Cloud’s Value

Now that you’ve migrated your applications to AWS, how can you take the value of being on the cloud to the next level? To provide guidance on next steps, here are 5 things you should consider to amplify the value of being on AWS.

Top 10 Cloud Optimization Best Practices

Cloud optimization is a continuous process specific to a company’s goals, but there are some staple best practices all optimization projects should follow. Here are our top 10.

1. Begin with the end in mind.

Business leaders and stakeholders throughout the organization should know exactly what they’re trying to achieve with a cloud optimization project. Additionally, this goal should be revisited on a regular basis to make sure you remain on track to achieve it. Create measures to gauge success at different points and follow the agreed-upon order of operations to complete the process.

2. Create structure around governance and responsibility.

Overprovisioning is one of the most common issues adding unnecessary costs to your bottom line. Implement specific and regulated structure around governance and responsibility for all teams involved in optimization to control any unnecessary provisioning. Check in regularly to make sure teams are following the structure and you only have the tools you need and are actively using.

3. Get all the data you need.

Cloud optimization is a data-driven exercise. To be successful, you need insight into a range of data pieces. Not only do you need to identify what data you need and be able to get it, but you also need to know what data you’re missing and figure out how to get it. Collaborate with internal teams to make sure essential data isn’t siloed and to find out whether it’s already being collected elsewhere. Additionally, regularly clean and validate data to ensure reliability for data-based decision making.

4. Implement tagging practices.

To best utilize the data you have, organizing and maintaining it with strict tagging practices is necessary. Implement a system that works from more than just a technical standpoint. You can also use tagging to launch instances, control your auto-parking methodology, or in scheduling. Tagging helps you understand the data and see what is driving spend. Whether it’s an environment tag, owner tag, or application tag, tagging provides clarity into spend, which is the key to optimization.
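To illustrate why tags are the key to spend clarity, here is a minimal sketch (plain Python with invented billing line items, not a real cloud billing API) of how an owner tag turns a raw bill into per-team spend, with untagged spend surfaced separately:

```python
# Hypothetical billing line items; "tags" mirrors an owner/env/application
# tagging scheme. All resource names and dollar amounts are invented.
line_items = [
    {"resource": "i-0a1",  "cost": 120.0, "tags": {"owner": "data-eng", "env": "prod"}},
    {"resource": "i-0b2",  "cost": 45.0,  "tags": {"owner": "web",      "env": "dev"}},
    {"resource": "vol-9z", "cost": 30.0,  "tags": {}},  # untagged -> unattributable
]

def spend_by_tag(items, tag_key):
    """Aggregate cost per tag value; untagged resources fall into 'UNTAGGED'."""
    totals = {}
    for item in items:
        key = item["tags"].get(tag_key, "UNTAGGED")
        totals[key] = totals.get(key, 0.0) + item["cost"]
    return totals

print(spend_by_tag(line_items, "owner"))
# {'data-eng': 120.0, 'web': 45.0, 'UNTAGGED': 30.0}
```

The 'UNTAGGED' bucket is the point: whatever lands there is spend no one is accountable for, which is exactly what strict tagging practices are meant to eliminate.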

5. Gain visibility into spend.

Tagging is one way to see where your spend is going, but it’s not the only one you need. Review accounts regularly to make sure inactive accounts aren’t continuing to be billed. Set up an internal mechanism to review with your app teams and hold them accountable. It can be as simple as a dashboard with tagging grading, as long as it lets the data speak for itself.

6. Hire the right technical expertise.

Get more out of your optimization with the right technical expertise on your internal team. Savvy technicians should work alongside the business teams to drive the goals of optimization throughout the process. Without collaboration between these departments, you risk moving in differing directions with multiple end goals in mind. For example, one team might be acting with performance or a technical aspect in mind without realizing the implication on optimization. Partnering with optimization experts can also keep teams aligned and moving toward the same goal.

7. Select the right tools and stick with them.

Tools are a part of the optimization process, but they can’t solve problems alone. Additionally, there is an abundance of tools to choose from, many of which have similar functionality and outcomes. Find the right tools for your goals, facilitate adoption, and give them the time and data necessary to produce results. Don’t get distracted by every new, shiny tool available and the “tool champions” fighting for one over another. Avoid the costs of overprovisioning by checking usage regularly and maintaining the governance structure established throughout your teams.

8. Make sure your tools are working.

Never assume a tool or a process you’ve put in place is working. In fact, it’s better to assume it’s not working and consistently check its efficiency. This regular practice of confirming the tools you have are both useful and being used will help you avoid overprovisioning and unnecessary spending. For tools to be effective and serve their purpose, you need enough visibility to determine how the tool is contributing to your overall end goal.

9. Empower someone to drive the process.

The number one call to action for anyone diving into optimization is to appoint a leader. Without someone specific, qualified, and active in managing the project with each stakeholder and team involved, you won’t accomplish your goals. Empower this leader internally to gain the respect and attention necessary for employees to understand the importance of continuous optimization and contribute on their part.

10. Partner with experts.

Finding the right partner to help you optimize efficiently and effectively will make the process easier at every turn. Bringing in an external driver who has the know-how and experience to consult on strategy through implementation, management, and replication is a smart move with fast results.

2nd Watch takes a holistic approach to cloud optimization with a team of experienced data scientists and architects who help you maximize performance and returns on your cloud assets. Are you ready to start saving? Let us help you define your optimization strategy to meet your business needs and maximize your results. Contact Us to take the next step in your cloud journey.

-Willy Sennott, Optimization Practice Manager

Steps to Continuous Cloud Optimization

Cloud optimization is an ongoing task for any organization driven by data. If you don’t believe you need to optimize, or you’re already optimized, you may not have the data necessary to see where you’re over-provisioned and wasting spend. Revisit the optimization pillars frequently to best evolve with and take advantage of everything the cloud has to offer.

Begin with the end in mind

The big question is, where are you trying to go? This question should constantly be revisited with internal stakeholders and business leaders. Define the process that will get you there and follow the order of operations identified to reach your optimization goal. Losing sight of the purpose, getting caught up in shiny new tools, or failing to incorporate the right teams could lead you off path.

Empower someone to drive the process

This is pivotal because without this appointed person, cloud optimization will not happen. Give someone the power to drive optimization policies throughout the organization. Companies most successful in achieving optimization have a good internal mandate to make it a priority. When messages come from the top, and are enforced through a project champion, people tend to pay attention and management is much more effective.

Fill the data gaps

Cloud optimization is a data driven exercise, so you need all the data you can get to make it valuable. Your tools will be much more compelling when they have the data necessary to make smart recommendations. Understand where to get the data in your organization, and figure out how to get any data you don’t have. Verify your data regularly to confirm accuracy for intelligent decision making geared toward optimization.

Implement tagging practices

The practice of not only implementing, but also actively enforcing your tagging policies, drives optimization. Be it an environment tag, owner tag, or application tag, tags help you understand your data and what or who is driving spend.

Enforce accountability

While lack of tagging and data gaps prevent visibility, overprovisioning is also an accountability issue. Just look at the hundred-plus AWS services that show up on the bill of a long-time cloud user. It’s not uncommon for 20-30% of the total to be attributed to services the organization never even knew existed at the time it migrated to the cloud.

Hold your app teams accountable with an internal mechanism that lets the data speak for itself. It can be as simple as a dashboard with tagging grading, because everybody understands those results.
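A “tagging grading” dashboard can be as simple as the share of each team’s resources carrying every required tag. A hypothetical sketch (the tag policy and team inventories below are invented for illustration):

```python
REQUIRED_TAGS = {"owner", "env", "application"}  # assumed tagging policy

def tag_grade(resources):
    """Return the fraction of resources that carry every required tag."""
    if not resources:
        return 1.0  # no resources means nothing out of compliance
    compliant = sum(1 for r in resources if REQUIRED_TAGS <= set(r["tags"]))
    return compliant / len(resources)

# Invented per-team inventories for illustration.
teams = {
    "app-team-a": [
        {"tags": {"owner": "a", "env": "prod", "application": "billing"}},
        {"tags": {"owner": "a"}},  # missing env + application
    ],
    "app-team-b": [
        {"tags": {"owner": "b", "env": "dev", "application": "etl"}},
    ],
}
for team, resources in teams.items():
    print(f"{team}: {tag_grade(resources):.0%} tagged")
# app-team-a: 50% tagged
# app-team-b: 100% tagged
```

Publishing a score like this per team lets the data speak for itself: no one argues with a 50% grade next to a peer team’s 100%.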

Rearchitect and refactor

Migrating to the cloud via a lift and shift can be a valuable strategy for certain organizations. However, after a few months in the cloud, you need to intentionally move forward with the next steps. Reevaluating, refactoring and rearchitecting will occur multiple times along the way. Without them, you end up spending more money than necessary.

Continuous optimization is a must

Optimization is not a one and done project because the possibilities are constantly evolving. Almost every day, a new technology is introduced. Maybe it’s a new instance family or tool. A couple years ago it was containers, and before that it was serverless. Being aware of these new and improved technologies is key to maintaining continuous optimization.

Engage with an experienced partner

There are a lot of factors to consider, evaluate, and complete as part of your cloud optimization practice. To maximize your optimization efforts, you want someone experienced to guide your strategy.

One benefit to partnering with an optimization expert, like 2nd Watch, is that an external partner can diffuse the internal conflicts typically associated with optimization. So much of the process is navigating internal politics and red tape. A partner helps meld the multiple layers of your business with a holistic approach that ensures your cloud is running as efficiently as possible.

-Willy Sennott, Optimization Practice Manager

You’re on AWS, now what? Five things you should consider now.

You migrated your applications to AWS for a reason. Maybe it was for the unlimited scalability, powerful computing capability, ease and flexibility of deployment, movement from CapEx to OpEx model, or maybe it was simply because the boss told you to. However you got there, you’re there. So, what’s next? How do you take advantage of your applications and data that reside in AWS? What should you be thinking about in terms of security and compliance? Here are 5 things you should consider in order to amplify the value of being on AWS:

  1. Create competitive advantage from your AWS data
  2. Accelerate application development
  3. Increase the security of your AWS environment
  4. Ensure cloud compliance
  5. Reduce cloud spend without reducing application deployment

Create competitive advantage from your data

You have a wealth of information in the form of your AWS datasets. Finding patterns and insights not just within these datasets, but across all datasets is key to using data analysis to your advantage. You need a modern, cloud-native data lake.

Data lakes, though, can be difficult to implement and require specialized, focused knowledge of data architecture. Utilizing a cloud expert can help you architect and deploy a data lake geared toward your specific business needs, whether it’s making better-informed decisions, speeding up a process, reducing costs or something else altogether.

Download this datasheet to learn more about transforming your data analytics processes into a flexible, scalable data lake.

Accelerate application development

If you arrived at AWS to take advantage of the rapid deployment of infrastructure to support development, you understand the power of bringing applications to market faster. Now may be the time to fully immerse your company in a DevOps transformation.

A DevOps Transformation involves adopting a set of cultural values and organizational practices that improve business outcomes by increasing collaboration and feedback between business stakeholders, Development, QA, IT Operations, and Security. This includes an evolution of your company culture, automation and tooling, processes, collaboration, measurement systems, and organizational structure—in short, things that cannot be accomplished through automation alone.

To learn more about DevOps transformation, download this free eBook about the Misconceptions and Challenges of DevOps Transformation.

Increase the security of your AWS environment

How do you know if your AWS environment is truly secure? You don’t, unless you conduct a comprehensive security assessment of your AWS environment that measures it against the latest industry standards and best practices. This type of review provides a list of vulnerabilities and actionable remediations, an evaluation of your Incident Response Policy, and a comprehensive consultation on the system issues that are causing these vulnerabilities.

To learn more, review this Cloud Security Rapid Review document and learn how to gain protection from immediate threats.

Ensure cloud compliance

Deploying and managing cloud infrastructure requires new skills, software and management to maintain regulatory compliances within your organization. Without the proper governance in place, organizations can be exposed to security vulnerabilities and potentially compromise confidential information.

A partner like 2nd Watch can be a great resource in this area. The 2nd Watch Compliance Assessment and Remediation service is designed to evaluate, monitor, auto-remediate, and report on compliance of your cloud infrastructure, assessing industry standard policies including CIS, GDPR, HIPAA, NIST, PCI-DSS, and SOC2.

Download this datasheet to learn more about our Compliance Assessment & Remediation service.

Reduce cloud spend without reducing application deployment

Need to get control of your cloud spend without reducing the value that cloud brings to your business? This is a common discussion we have with clients. To reduce your cloud spend without decreasing the benefits of your cloud environment, we recommend examining the Pillars of Cloud Cost Optimization to prevent over-expenditure and wasted investment. The pillars include:

  • Auto-parking and on-demand services
  • Cost models
  • Rightsizing
  • Instance family / VM type refresh
  • Addressing waste
  • Shadow IT

For organizations that incorporate cloud cost optimization into their cloud infrastructure management, significant savings can be found, especially in larger organizations with considerable cloud spend.
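Of the pillars listed above, rightsizing lends itself to the simplest sketch: compare observed utilization against thresholds and flag candidates for a smaller (or larger) size. The 20%/80% thresholds below are illustrative assumptions, not provider guidance; real tooling also weighs memory, I/O, and burst behavior:

```python
def rightsize_recommendation(avg_cpu_pct: float, peak_cpu_pct: float) -> str:
    """Naive rightsizing rule based only on CPU utilization.
    Thresholds are assumed for illustration."""
    if peak_cpu_pct < 20:
        return "downsize"   # consistently idle: a smaller instance suffices
    if avg_cpu_pct > 80:
        return "upsize"     # consistently saturated: performance risk
    return "keep"

print(rightsize_recommendation(avg_cpu_pct=5, peak_cpu_pct=12))   # downsize
print(rightsize_recommendation(avg_cpu_pct=90, peak_cpu_pct=99))  # upsize
print(rightsize_recommendation(avg_cpu_pct=40, peak_cpu_pct=75))  # keep
```

Even a crude rule like this, run across an inventory, tends to surface the chronically idle instances that account for much of the wasted spend the pillars target.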

Download our A Holistic Approach to Cloud Cost Optimization eBook to learn more.

After you’ve migrated to AWS, the next logical step in ensuring IT satisfies corporate business objectives is knowing what’s next for your organization in the cloud. Moving to the cloud was the right decision then and can remain the right decision going forward. Implement any of the five recommendations and accelerate your organization forward.

-Michael Elliott, Sr Director of Product Marketing

Cloud for Advanced Users – The 5 Most Important Lessons Learned Over a Decade

Being involved in cloud services and working closely with cloud providers over the past 10 years has given us a great deal of insight into the triumphs and pitfalls of cloud consumers. We’ve distilled that vast experience and come up with our list of the 5 most important lessons we’ve learned over the past decade for users that are experienced in the cloud with multiple applications/workloads running.

1. Governance – Tagging, Tools, and Automation

Many of our customers have hundreds, if not thousands, of accounts, and we’ve helped them solve many of their governance challenges. One challenge is preventing practices like shadow IT and operating in silos. In the cloud, you want everyone to have visibility into best practices and an understanding of the critical role cloud plays in creating business value.

There are numerous tools and automation methods you can leverage to ensure your governance is in step with the latest innovation. First and foremost, a strong tagging strategy is critical. As with shadow IT, if you don’t tag things correctly, your teams can spin up resources with limited visibility into who owns them, leaving them running and accumulating expenses over time. If you don’t start with a tagging strategy from day one, correcting it retroactively is a herculean task. Starting with a strong architectural foundation and making sure that foundation stays in place with the proper tools will ensure governance doesn’t become a burden.

Putting the proper guardrails in place for this, such as AWS Config, can help overcome this challenge and make sure everybody’s following the rules. Sometimes governance and moving fast can seem like adversaries, but automation can help satisfy both.
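As a sketch of the kind of guardrail a tool like AWS Config enforces, here is a plain-Python stand-in (not the AWS Config API) that flags untagged resources which have outlived a grace period, the pattern that otherwise leaves unowned resources accumulating expenses:

```python
from datetime import datetime, timedelta, timezone

def flag_orphaned(resources, max_untagged_age=timedelta(days=7)):
    """Flag resources with no 'owner' tag running longer than max_untagged_age.
    The field names and 7-day grace period are illustrative assumptions."""
    now = datetime.now(timezone.utc)
    return [
        r["id"] for r in resources
        if "owner" not in r["tags"] and now - r["launched"] > max_untagged_age
    ]

# Invented inventory for illustration.
inventory = [
    {"id": "i-001", "tags": {"owner": "web"},
     "launched": datetime.now(timezone.utc) - timedelta(days=30)},
    {"id": "i-002", "tags": {},
     "launched": datetime.now(timezone.utc) - timedelta(days=14)},  # orphan
    {"id": "i-003", "tags": {},
     "launched": datetime.now(timezone.utc) - timedelta(days=1)},   # in grace period
]
print(flag_orphaned(inventory))  # ['i-002']
```

Running a check like this on a schedule, and auto-stopping or escalating what it flags, is how governance and moving fast stop being adversaries.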

2. Optimization – It’s not a one-time exercise

Cloud users tend to think of optimization in terms of Reserved Instances (RIs), but it reaches far beyond just RIs. Well-defined policies must exist to control spend, along with the discipline to follow them.

There are many ways to leverage cloud native solutions, products, and new classes of service to achieve optimization. One key point is leveraging the right resources where appropriate. As new services come out and skills increase within organizations, the opportunity to optimize not only spend but the applications themselves by leveraging more cloud native services will continue to drive down operating costs.
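For the RI piece specifically, the break-even arithmetic is worth sketching: a reservation is paid for every hour of the term whether used or not, so it only wins if the workload runs enough hours to amortize the commitment. All prices below are hypothetical:

```python
def ri_breakeven_hours(on_demand_hourly: float, ri_effective_hourly: float,
                       term_hours: float) -> float:
    """Hours the workload must run during the term before the RI commitment
    (charged for every hour of the term) beats pay-as-you-go on-demand."""
    total_ri_cost = ri_effective_hourly * term_hours
    return total_ri_cost / on_demand_hourly

YEAR_HOURS = 8760
# Hypothetical: $0.10/hr on-demand vs $0.06/hr effective 1-year RI rate.
breakeven = ri_breakeven_hours(0.10, 0.06, YEAR_HOURS)
print(f"RI pays off after {breakeven:.0f} of {YEAR_HOURS} hours "
      f"({breakeven / YEAR_HOURS:.0%} utilization)")
# RI pays off after 5256 of 8760 hours (60% utilization)
```

This is why RI policy needs discipline behind it: committing to instances that run below the break-even utilization turns a discount into a loss.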

Optimization is not a one-time exercise, either. It’s an ongoing practice that needs to be done on a regular basis. Like cleaning out the garage, you need to maintain it. Who’s responsible for this? Often, it’s your company’s Cloud Center of Excellence, or a partner like 2nd Watch.

3. Cloud Center of Excellence – Be bold and challenge the norm

We encourage all organizations to form a Cloud Center of Excellence (CCoE). Typically led by an executive, your CCoE should be a multi-stakeholder organization that includes representatives from all areas of the business. With this multi-skilled group, you benefit from subject matter experts across a wide variety of areas within your organization who collectively become subject matter experts in cloud services and solutions. When you break down silos, you’re able to move rapidly.

Your CCoE should be formed at the beginning of your migration and continue to revisit new capabilities released in the cloud on an ongoing basis, updating the organization’s standards to ensure enforcement.

One of the CCoE’s biggest roles is evangelizing within the organization to ensure people are embracing the cloud and celebrating successes, whether they come from implementing DevOps with cloud native tools or from optimization and cloud refactoring. The CCoE’s motto should be: ‘Be bold, challenge the norm, look for new ways of doing things, and celebrate BIG.’

4. Multi-Cloud – Get out of your comfort zone

As an advanced user, you have grown up with AWS and have a solid understanding and background of AWS. You’ve learned all the acronyms for AWS and understand the products and services. But now you’re being asked to integrate another CSP you might not be as familiar with. How do you take that basic cloud knowledge and transition to Azure or GCP?

There’s a little bit of a learning curve, so we recommend taking a training course. Some even offer training based upon your knowledge of AWS. For example, GCP offers training for AWS professionals. Training can help you acclimate to the nomenclature and technology differences between CSPs.

We typically see customers go deep with one cloud provider, and that tends to be where most workloads reside. This can be for financial reasons or due to skills and experience. You get a greater discount when you push more things into one CSP. However, some solutions fit better in one CSP over the other. To maximize your cloud strategy, you need to break down walls, get out of your comfort zone, and pursue the best avenue for the business.

5. Talent – Continuously sharpen the knife’s edge

Cloud talent is in high demand, so attracting it can be challenging. One way to overcome this is to develop talent internally. All cloud providers offer certifications, and incentivizing employees to earn them goes a long way. With that, success breeds success. Celebrate and evangelize early wins!

The cloud changes fast, so you need to continuously retrain and relearn. And as a bonus – those individuals that are involved in the CCoE have the unique opportunity to learn and grow outside of their area of expertise, so proactively volunteer to be a part of that group.

If you want more detailed information in any of these five areas, we have a wealth of customer examples we’d love to jump into with you. Contact us to start the conversation.

-Ian Willoughby, Chief Architect and Skip Barry, Executive Cloud Enablement Director