
5 Best Practices for Managing the Complexities of a Hybrid Cloud Strategy

Hybrid cloud strategies require significant effort and knowledge to construct, spanning infrastructure, orchestration, applications, data migration, IT management, and the silos that can form along the way. A number of complexities must be addressed to enable seamless integration of a well-constructed hybrid cloud strategy. We recommend employing these 5 best practices as you move toward a multi-cloud or hybrid cloud architecture to ensure a successful transition.

Utilize cloud management tools.

Cloud management providers have responded to the complexities of a hybrid strategy with an explosion of cloud management tools. These tools cover automation and governance, lifecycle management, usability, access, and more, and they perform many of these tasks with greater visibility.

Unique tooling for each cloud provider is especially important. Some partners may recommend a single pane of glass for simplicity, but that approach can prove too simplistic for service catalogues and for launching new resources. The risk of going too simple is missing the opportunity to take advantage of the best aspects of each cloud.

Complete a full assessment of applications and dependencies first.

Before you jump into a hybrid cloud strategy, you need to start with a full assessment of your applications and dependencies. A common misstep is moving applications to the public cloud while keeping your database in your private cloud or on-prem datacenter. The result is network latency drag, leading to problems like slow page loads and videos that won’t play.

Mapping applications and dependencies to the right cloud resource prior to migration gives you the insight necessary for a complete migration with uninterrupted performance. Based on the mapping, you know what to migrate and when, with full visibility into what each move will impact. This initial step will also help with cloud implementation and hybrid connectivity down the line.

Put things in the right place.

This might sound obvious, but it can be challenging to rationalize where to put all your data in a hybrid environment. Start by using the analysis of your applications and dependencies discussed above. The mapping provides insight into traffic flows, networking information, and the different types of data you’re dealing with.

A multi-cloud environment is even more complex with cost implications and networking components. On-prem skills related to wide area network (WAN) connectivity are still necessary as you consider how to monitor the traffic – ingress, egress, east, and west.

Overcome silos.

Silos can be found in all shapes and sizes in an organization, but one major area for silos is in your data. Data is one of the biggest obstacles to moving to the cloud because of the cost of moving it in and out and accessing it. The amount of data you have impacts your migration strategy significantly, so it’s critical to have a clear understanding of where data may be siloed.

Every department has its own data, and all of it must be accounted for prior to migrating. Some data silo issues can be resolved with data lakes and data platforms, but once you realize silos exist, there’s an opportunity to break them down throughout the organization.

An effective way to break down silos is to get buy-in from organizational leaders to change the cultural patterns that create silos in the first place. Create a Cloud Center of Excellence (CCoE) during your cloud transformation to understand and address challenges within the context of the hybrid strategy across the organization.

Partner with proven experts.

Many companies have been successful in their hybrid cloud implementation by leveraging a partner for some of the migration, while their own experts manage their internal resources. With a partner by your side, you don’t have to invest in the initial training of your staff all at once. Instead, your teams can integrate those new capabilities and skills as they start to work with the cloud services, which typically increases retention, reduces training time, and increases productivity.

Partners will also have the knowledge necessary to make sure you not only plan but also implement and manage the hybrid architecture for overall efficiency. When choosing a partner, make sure they’ve proven the value they can bring. For instance, 2nd Watch is one of only five VMware Cloud on AWS Master Services Competency holders in the United States. That means we have the verified experience to understand the complexities of running a hybrid VMware Cloud implementation.

If you’re interested in learning more about the hybrid cloud consulting and management solutions provided by 2nd Watch, Contact Us to take the next step in your cloud journey.

-Dusty Simoni, Sr Product Manager, Hybrid Cloud


Optimizing your environment using AWS Savings Plans

AWS has quietly released a major overhaul of how you purchase compute resources up front. To date, purchasing Reserved Instances (Standard or Convertible) has offered AWS users great savings on static workloads, which tend to use a set number of resources and therefore justify the advance financial commitment. But how often do business needs remain constant, particularly at today’s pace of product development? Until now, you had two choices if you couldn’t use your RIs: take the loss and let the RI term run out, or undertake the hassle of selling them on the marketplace (potentially at a loss). AWS Savings Plans are a giant leap forward in solving this problem. In fact, you will find that Savings Plans provide far more flexibility and return on your investment than the standard RI model.

Here is the gist of the AWS Savings Plans program, taken from the AWS site:

AWS Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instances usage, regardless of instance family, size, OS, tenancy or AWS Region, and also applies to AWS Fargate usage.

AWS Savings Plans offer significant savings over On Demand, just like EC2 Reserved Instances, in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. You can sign up for Savings Plans for a 1- or 3-year term and easily manage your plans by taking advantage of recommendations, performance reporting and budget alerts in the AWS Cost Explorer. (Jeff Barr, AWS, 11.06.2019)

This is HUGE for AWS clients, because now, for the first time ever, savings can also be applied to workloads that leverage serverless containers—as well as traditional EC2 instances!

Currently there are two AWS Savings Plans, and here’s how they compare:

EC2 Instance Savings Plan:

  • Offers discount levels up to 72% off on-demand rates (the same as Standard RIs).
  • Any changes in instances are restricted to the same AWS Region.
  • Restricts EC2 instances to the same family, but allows changes in instance size and OS (e.g., t3.medium to t3.2xlarge).
  • EC2 instances only. Similar to Convertible RIs, this plan allows you to increase instance size, with a new twist: you can also reduce instance size! Yes, this means you may no longer have to sell your unused RIs on the marketplace!
  • BOTTOM LINE: Slightly less flexible, but you garner a greater discount.

Compute Savings Plan:

  • Offers discount levels up to 66% off on-demand rates (the same rate as Convertible RIs).
  • Spans Regions, which can be a huge draw for companies needing regional or national coverage.
  • More flexible: does not limit EC2 instance families or OS, so you are no longer locked into a specific instance family at the moment of purchase, as you would be with a traditional RI.
  • Allows clients to mix and match AWS products, such as EC2 and Fargate; extremely beneficial for clients who use a range of environments for their workloads.
  • BOTTOM LINE: More flexible, but with less of a discount.
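If you want a data point before committing, AWS Cost Explorer can generate Savings Plan purchase recommendations from your own usage history. Below is a minimal boto3 sketch; the term, payment option, and lookback window are illustrative choices, and AWS credentials with Cost Explorer access are assumed:

import boto3

# Cost Explorer is served from the us-east-1 endpoint.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",       # or "EC2_INSTANCE_SP"
    TermInYears="ONE_YEAR",              # or "THREE_YEARS"
    PaymentOption="NO_UPFRONT",          # or "PARTIAL_UPFRONT" / "ALL_UPFRONT"
    LookbackPeriodInDays="THIRTY_DAYS",  # base the recommendation on the last 30 days
)

recommendation = response.get("SavingsPlansPurchaseRecommendation", {})
for detail in recommendation.get("SavingsPlansPurchaseRecommendationDetails", []):
    print(detail["HourlyCommitmentToPurchase"], "per hour commitment,",
          detail["EstimatedSavingsPercentage"], "% estimated savings")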

As with standard RI purchases, understanding your workloads will be key to determining when to use AWS Savings Plans vs. standard RIs (RIs aren’t going anywhere, but we recommend that Savings Plans be used in place of RIs moving forward) vs. On-Demand (including analysis of potential savings from auto-parking, seasonality, elasticity, and so on). Sound a bit overwhelming? Fear not! This is where 2nd Watch’s Cloud Optimization service excels! Enrollment starts with a full analysis of your organization’s usage, AWS environment, and any other requirements/restrictions your organization may have. The final result is a detailed report, expertly determined by our AWS-certified optimization engineers, with our savings findings and recommendations—customized just for you!

AWS Savings Plans bring the most immediate value to clients who are either new to AWS or have no current RI commitments on their account, because Savings Plans cannot, unfortunately, replace existing RI purchases. Whatever your goals, our optimization experts are ready to help you plan the most strategically efficient and cost-effective next step of your cloud transformation.

And that’s just the beginning

If you think that AWS Savings Plans may benefit your new or existing AWS deployment, contact us to jumpstart an analysis.

-Jeff Collins, Cloud Optimization Product Management


5 Steps to Cloud Cost Optimization: Hurdles to Optimization are Organizational, Not Technical

In my last blog post, I covered the basics of cloud cost optimization using the Six Pillars model, focusing on the ‘hows’ of optimization and the ‘whys’ of its importance. In this blog, I’d like to talk about what comes next: preparing your organization for your optimization project. The main reason most clients delay or avoid confronting cloud optimization is that it’s incredibly complex. Challenges from cloud sprawl to misaligned corporate priorities can bring a project to a screeching halt. Understanding the challenges before you begin is essential to getting off on the right foot. Here are the 5 main challenges we’ve seen when implementing a cloud cost optimization project:

  • Cloud sprawl refers to the unrestricted, unregulated creation and use of cloud resources; cloud cost sprawl, therefore, refers to the costs incurred for each and every cloud resource (e.g., storage, instances, data transfer). This typically presents as decentralized account or subscription management.
  • Billing complexity, in this case, refers to the ever-changing, variable billing practices of cloud providers and the invoices they send you. Across all the possible configurations in an organization’s solutions, Amazon Web Services (AWS) alone has more than 500,000 SKUs that could appear on any single invoice. If you cannot make sense of your bill up front, your cost optimization efforts will languish (a first cut at breaking the bill down appears in the sketch after this list).
  • Lack of access to data and application metrics is one of the biggest barriers to entry. Cost optimization is a data-driven exercise; without billing data and application metrics over time, teams end up making incorrect assumptions that result in higher costs.
  • Misaligned policies and methods can be the obstacle that makes or breaks your optimization project. When every team, organization, or department has its own method for managing cloud resources and spend, the solution becomes more organizational change and less technology implementation. This can be difficult to get a handle on, especially if the teams aren’t on the same page about the need to optimize.
  • A lack of incentives may surprise many (after all, who doesn’t want to save money?), but it is the number one blocker toward achieving optimization end goals that we have experienced in large enterprises. Central IT is laser-focused on cost management, while application and business units are focused more on speed and innovation. Both goals are important, but without the right incentives, process, and communication, this fails every time. Building executive support to reapply realized optimization savings directly back to the business units, increasing their application and innovation budgets, is the only way to bridge misaligned priorities and build the foundation for lasting optimization motivation.
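To make the bill legible at all, the first cut is usually spend by service. Here is a minimal sketch using boto3 and the AWS Cost Explorer API; the date range is illustrative and credentials with Cost Explorer access are assumed:

import boto3

ce = boto3.client("ce", region_name="us-east-1")

result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-10-01", "End": "2019-11-01"},  # one month, illustrative
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print each service's unblended cost for the month, skipping zero-cost rows.
for group in result["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if cost > 0:
        print(f"{service}: ${cost:,.2f}")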

According to many cloud software vendors, waste accounts for 30% to 40% of all cloud usage. In the RightScale State of the Cloud Report 2019, a survey revealed that 35% of cloud spend is wasted. 2nd Watch has found that within large enterprise companies, there can be up to 70% savings through a combination of software and services. It often starts with simply implementing a solid cost optimization methodology.

When working on a project for cloud cost optimization, it’s essential to first get the key stakeholders of an organization to agree on the benefits of optimizing your cloud spend. Once the executive team is on board and an owner is assigned, the path to optimization is clear, covering each of the 6 Pillars of Optimization.

THE PATH TO OPTIMIZATION

STEP ONE – Scope It Out!

As with any project, you first want to identify the goals and scope and then uncover the current state environment. Here are a few questions to ask to scope out your work:

  • Overall Project Goal – Are you focused on cost savings, workload optimization, uptime, performance or a combination of these factors?
  • Budget – Do you want to sync to a fiscal budget? What is the cycle? What budget do you have for upfront payments? Do you budget at an account level or organization level?
  • Current State – What number of instances and accounts do you have? What types of agreements do you have with your cloud provider(s)?
  • Growth – Do you grow seasonally, or do you have planned growth based on projects? Do you anticipate existing workloads growing or shrinking over time?
  • Measurement – How do you currently view your cloud bill? Do you have detailed billing enabled? Do you have performance metrics over time for your applications?
  • Support – Do you have owners for each application? Are people available to assess each app? Are you able to shut down apps during off hours? Do you have resources to modernize applications?

STEP TWO – Get Your Org Excited

One of the big barriers to a true optimization is gaining access to data. In order to gather the data (step 3), you first need to get the team on board to grant you or the optimization project team access to the information.

During this step, get your cross-functional team excited about the project, share the goals and current state info you gathered in the previous step and present your strategy to all your stakeholders.

Stakeholders may include application owners, cloud account owners, IT Ops, IT security and/or developers who will have to make changes to applications.

Remember, data is key here, so find the people who own the data. Those who are monitoring applications or own the accounts are the typical stakeholders to involve. Then share with them the goals and bring them along this journey.

STEP THREE – Gather Your Data

Data is grouped into a few buckets:

  1. Billing Data – Get a clear view of your cloud bill over time.
  2. Metrics Data – CPU, I/O, Bandwidth and Memory for each application over time is essential.
  3. Application Data – Conduct interviews of application owners to understand the nuances. Graph out risk tolerance, growth potential, budget constraints and identify the current tagging strategy.

A month’s worth of data is good, though three months is much better for understanding the capacity variances of your applications and projecting into the future.
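As a sketch of what gathering metrics data can look like, here is a boto3 call that pulls 90 days of daily CPU statistics for a single EC2 instance from CloudWatch. The instance ID is a placeholder; in practice you would loop over your inventory and capture memory and I/O as well:

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=90),
    EndTime=datetime.utcnow(),
    Period=86400,                        # one datapoint per day
    Statistics=["Average", "Maximum"],
)

# Print the daily average and peak CPU, oldest first.
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(),
          f'avg {point["Average"]:.1f}%', f'max {point["Maximum"]:.1f}%')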

STEP FOUR – Visualize and Assess Your Usage

This step takes a bit of skill. Tools like CloudHealth can help you understand your cost and usage in the cloud, while other tools can help you understand your application performance over time. Correlating the data from each of these sources across the pillars of optimization is essential to understanding where you can find the optimal cost savings.

I often recommend bringing in an optimization expert for this step. Someone with a data science, cloud and accounting background can help you visualize data and find the best options for optimization.

STEP FIVE – Plan Your Remediation Efforts and Get to Work!

Now that you know where you can save, take that information and build out a remediation plan. This should include addressing workloads in one or more of the pillars.

For example, you may shut down resources at night for an application and move it to another family of instances/VMs based on current pricing.

Your remediation should include changes by application as well as:

  1. RI Purchase Strategy across the business on a 1 or 3-year plan.
  2. Auto-Parking Implementation to park your resources when they’re not in use.
  3. Right-Sizing based on CPU, memory, I/O.
  4. Family Refresh or movement to the newer, more cost-effective instance families or VM-series.
  5. Elimination of Waste like unutilized instances, unattached volumes, idle load balancers, etc.
  6. Storage reassessment based on size, data transfer, retrieval time and number of retrieval requests.
  7. Tagging Strategy to tag each instance/VM so it can be traced back to the right owners and cost centers.
  8. IT Chargeback processes and systems to manage ongoing accountability.

Remediation can take anywhere from one month to a year’s time based on organization size and the support of application teams to make necessary changes.

Download our ‘5 Steps to Cloud Cost Optimization’ infographic for a summary of this process.

End Result

With as much as 70% savings possible after implementing one of these projects, you can see the compelling reason to start. Many of the benefits are organizational and long-lasting, including:

  • Visibility to make the right cloud spending decisions
  • Break-down of your cloud costs by business area for chargeback or showback
  • Control of cloud costs while maintaining or increasing application performance
  • Improved organizational standards to keep optimizing costs over time
  • Identification of short- and long-term cost savings across the various optimization pillars

Many companies reallocate the savings to innovative projects to help their company grow. The outcome of a well-managed cloud cost optimization project can propel your organization into a focus on cloud-native architecture and application refactoring.

Though complex, cloud cost optimization is an achievable goal. By cross-referencing the 6 pillars of optimization with your organization’s policies, applications, and teams, you can quickly find savings of 30-40% and grow from there.

By addressing project risks like lack of awareness, decentralized account management, lack of access to data and metrics, and lack of clear goals, your team can quickly achieve savings.

Ready to get started with your cloud cost optimization? Schedule a Cloud Cost Optimization Discovery Session for a free 2-hour session with our team of experts.

-Stefana Muller, Sr Product Manager


The 6 Pillars of Cloud Cost Optimization

Let me start by painting the picture: You’re the CFO. Or the manager of a department, group, or team, and you’re ultimately responsible for any and all financial costs incurred by your team/group/department. Or maybe you’re in IT and you’ve been told to keep a handle on the costs generated by application use and code development resources. Your company has moved some or all of your projects and apps to the public cloud, and since things seem to be running pretty smoothly from a production standpoint, most of the company is feeling pretty good about the transition.

Except you.

The promise of moving to the cloud to cut costs hasn’t materialized, and attempting to figure out the monthly bill from your cloud provider has you shaking your head.

Source: Amazon Web Services (AWS). “Understanding Consolidated Bills – AWS Billing and Cost Management”. (2017). Retrieved from https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/con-bill-blended-rates.html

From Reserved Instances and on-demand costs to the “unblended” and “blended” rates, even making sense of the bill leaves you no closer to understanding where you can optimize your spend.

It’s not just the pricing structure that requires an entire department of accountants to make sense of; the breakdown of the services themselves is just as mind-boggling. In fact, there are at least 500,000 SKUs and price combinations in AWS alone! In addition, your team likely has no limits on who can spin up any specific resource at any time, intrinsically compounding the problem, especially when staff leave resources running with the proverbial meter racking up the $$ in the background.

Addressing this complex and ever-moving problem is not, in fact, a simple matter, and requires a comprehensive approach that starts with understanding the variety of opportunities available for cost and performance optimization. This is where 2nd Watch and our Six Pillars of Cloud Optimization come in.

The Six Pillars of Cloud Cost Optimization

  1. Reserved Instances (RIs)

AWS Reserved Instances, Azure Reserved VM Instances, and Google Cloud Committed Use Discounts take the ephemeral out of cloud resources, allowing you to estimate up front what you’re going to use. This also entitles you to steep discounts for pre-planning, which ends up as a great financial incentive.

Most cloud cost optimization efforts erroneously begin and end here, providing you and your organization with a less-than-optimal solution. Resources to estimate RI purchases are available through cloud providers directly and through 3rd-party optimization tools. For example, CloudHealth by VMware provides a clear picture of where to purchase RIs based on your cloud use over a number of months and will help you manage your RI lifecycle over time.

Two of the major factors to consider with cloud cost optimization are Risk Tolerance and Centralized RI Management portfolios.

  • Risk Tolerance refers to identifying how much you’re willing to spend up front in order to increase the possibility of future gains or recovered profits. For example, can your organization take a risk and cover 70% of your workloads with RIs? Or do you worry about consumption, and will therefore want to limit that to around 20-30%? Also, how long, in years, are you able to project ahead? One year is the least risky, sure, but three years, while also a larger financial commitment, comes with larger cost savings.
  • Centralized RI Management portfolios allow for deeper RI coverage across organizational units, resulting in even greater savings opportunities. For instance, a single application team might have a limited pool of cash in which to purchase RIs. Alternatively, a centralized, whole organization approach would cover all departments and teams for all workloads, based on corporate goals. This approach, of course, also requires ongoing communication with the separate groups to understand current and future resources needed to create and execute a successful RI management program.

Once you identify your risk tolerance and centralize your approach to RIs, you can take advantage of this optimization option. An RI-only optimization strategy is short-sighted, though; it only takes advantage of the pricing options your cloud vendor offers. It is important to overlay RI purchases with the 5 other optimization pillars to achieve the most effective cloud cost optimization.
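To make the risk-tolerance trade-off concrete, here is a back-of-the-envelope sketch in Python. The on-demand rate and RI discount are hypothetical round numbers, not quoted AWS prices; the point is how the coverage level changes the blended annual cost:

ON_DEMAND_RATE = 0.10   # $/hour per instance -- hypothetical
RI_DISCOUNT = 0.40      # assume a 40% discount off on-demand for a 1-year RI
HOURS_PER_YEAR = 8760

def effective_annual_cost(coverage, instances=100):
    """Blend RI and on-demand rates for a fleet at a given RI coverage level."""
    ri_rate = ON_DEMAND_RATE * (1 - RI_DISCOUNT)
    blended = coverage * ri_rate + (1 - coverage) * ON_DEMAND_RATE
    return blended * instances * HOURS_PER_YEAR

for coverage in (0.2, 0.5, 0.7):
    print(f"{coverage:.0%} coverage: ${effective_annual_cost(coverage):,.0f}/year")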

  2. Auto-Parking

One of the benefits of the cloud is the ability to spin up (and down) resources as you need them. However, the downside of this instant technology is that there is very little incentive for individual team members to terminate these processes when they are finished with them. Auto-Parking refers to scheduling resources to shut down during off hours—an especially useful tool for development and test environments. Identifying your idle resources via a robust tagging strategy is the first step; this allows you to pinpoint resources that can be parked more efficiently. The second step involves automating the spin-up/spin-down process. Tools like ParkMyCloud, AWS Instance Scheduler, Azure Automation, and Google Cloud Scheduler can help you manage the entire auto-parking process.
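As a minimal sketch of the spin-down half of auto-parking, the script below stops every running EC2 instance carrying a hypothetical schedule=office-hours tag. Run it from a nightly scheduled job, with a mirror-image script calling start_instances in the morning:

import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged for office-hours-only operation.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:schedule", "Values": ["office-hours"]},  # illustrative tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Parked {len(instance_ids)} instances for the night.")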

  3. Right-Sizing

Ah, right-sizing, the best way to ensure you’re using exactly what you need and not too little or too much. It seems like a no-brainer to just “enable right-sizing” immediately when you start using a cloud environment. However, without the ability to analyze resource consumption or enable chargebacks, right-sizing becomes a meaningless concept. Performance and capacity requirements for cloud applications often change over time, and this inevitably results in underused and idle resources.

Many cloud providers share best practices in right-sizing, though they spend more time explaining the right-sizing options that exist prior to a cloud migration. This is unfortunate, because to be truly effective, right-sizing must be an ongoing activity that involves implementing policies and guardrails to reduce overprovisioning, tagging resources to enable department-level chargebacks, and properly monitoring CPU, memory, and I/O.

Right-sizing must also take into account auto-parked resources and RIs available. Do you see a trend here with the optimization pillars?
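A right-sizing pass can start from a simple decision rule over the metrics you gathered earlier. The thresholds below are illustrative assumptions, not a standard; memory and I/O should be weighed before acting on any flag:

def downsize_candidate(avg_cpu_percent, max_cpu_percent):
    """Flag an instance whose CPU never approaches its provisioned capacity."""
    return avg_cpu_percent < 20 and max_cpu_percent < 40   # illustrative thresholds

print(downsize_candidate(8.5, 31.0))    # True  -> review for a smaller size
print(downsize_candidate(45.0, 88.0))   # False -> leave as-is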

  4. Family Refresh

Instance types, VM-series and “Instance Families” all describe methods by which cloud providers package up their instances according to the hardware used. Each instance/series/family offers different varieties of compute, memory, and storage parameters. Instance types within their set groupings are often retired as a unit when the hardware required to keep them running is replaced by newer technology. Cloud pricing changes directly in relationship to this changing of the guard, as newer systems replace the old. This is called Family Refresh.

Up-to-date knowledge of the instance types/families being used within your organization is a vital component to estimating when your costs will fluctuate. Truth be told, though, with over 500,000 SKU and price combinations for any single cloud provider, that task seems downright impossible.

Some tools exist, however, that can help monitor/estimate Family Refresh, though they often don’t take into account the overlap that occurs with RIs—or upon application of any of the other pillars of optimization. As a result, for many organizations, Family Refresh is the manual, laborious task it sounds like. Thankfully, we’ve found ways to automate the suggestions through our optimization service offering.
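For a taste of how the price comparison behind Family Refresh can be automated, the sketch below queries the AWS Price List API for the on-demand rate of an older and a newer generation instance type. The filter fields and values are to the best of our knowledge; verify them against the Price List API documentation for your region:

import json
import boto3

# The Price List API is served from the us-east-1 endpoint.
pricing = boto3.client("pricing", region_name="us-east-1")

def on_demand_price(instance_type):
    """Return the Linux on-demand $/hour for an instance type in N. Virginia."""
    resp = pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": instance_type},
            {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
            {"Type": "TERM_MATCH", "Field": "tenancy", "Value": "Shared"},
            {"Type": "TERM_MATCH", "Field": "preInstalledSw", "Value": "NA"},
            {"Type": "TERM_MATCH", "Field": "capacitystatus", "Value": "Used"},
        ],
        MaxResults=1,
    )
    product = json.loads(resp["PriceList"][0])
    term = next(iter(product["terms"]["OnDemand"].values()))
    dimension = next(iter(term["priceDimensions"].values()))
    return float(dimension["pricePerUnit"]["USD"])

for itype in ("m4.large", "m5.large"):   # older vs. newer generation
    print(itype, on_demand_price(itype), "$/hr")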

  5. Waste

Related to the issue of instances running long past their usefulness, waste is prevalent in the cloud. Waste may seem like an abstract concept when it comes to virtual resources, but each wasted unit in this case equals real dollars spent for no purpose. And when there is no limit on the amount of resources you can use, there is also no incentive for the individuals using those resources to self-regulate their unused and under-utilized instances. Some examples of waste in the cloud include:

  • AWS RDS or Azure SQL databases with no connections
  • Unutilized AWS EC2 instances
  • Azure VMs that were spun up for training or testing and never terminated
  • Dated snapshots holding storage space that will never be useful
  • Idle load balancers
  • Unattached volumes

Identifying waste takes time and accurate reporting, which is a great reason to invest in developing a proper tagging strategy: waste becomes instantly traceable to the organizational unit that incurred it, and therefore easily marked for review and/or removal. We’ve often seen companies buy RIs before they eliminate waste, which, without fail, causes them to overspend in the cloud for at least a year.
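As one concrete example, the sketch below hunts for unattached EBS volumes and estimates the monthly storage they quietly consume. The per-GB rate is an illustrative assumption; check current pricing for your region and volume type:

import boto3

ec2 = boto3.client("ec2")

# "available" status means the volume exists but is attached to nothing.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

total_gb = sum(v["Size"] for v in volumes)
print(f"{len(volumes)} unattached volumes totaling {total_gb} GB, "
      f"roughly ${total_gb * 0.10:,.2f}/month of waste")   # $0.10/GB-month assumed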

  6. Storage

Storage in the cloud is a great way to reduce on-premises hardware spend. That said, though, because it is so effortless to use, cloud storage can, in a very short matter of time, expand exponentially, making it nearly impossible to predict accurate cloud spend. Cloud storage is usually charged by four characteristics:

  • Size – How much storage do you need?
  • Data Transfer (bandwidth) – How often does your data need to move from one location to another?
  • Retrieval Time – How quickly do you need to access your data?
  • Retrieval Requests – How often do you need to access your data?

There are a variety of options for different use cases, including file storage, databases, data backup, and/or data archives. Having a solid data lifecycle policy will help you estimate these numbers and ensure you are right-sizing and using your storage quantity and bandwidth to their greatest potential at all times.
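A data lifecycle policy often translates directly into storage-tier rules. Here is a minimal boto3 sketch that, for a hypothetical bucket, transitions objects to Glacier after 90 days and expires them after a year; the bucket name and day counts are illustrative:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-archive",   # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)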

So, you see, each of these six pillars of cloud cost optimization houses many moving parts, and with public cloud providers constantly modifying their service offerings and pricing, wrangling in your wayward cloud can seem unlikely. Moreover, optimizing only one of the pillars without considering the others offers little to no improvement, and can, in fact, unintentionally cost you more money over time. An efficacious optimization process must take all the pillars, and the ways they overlap, into account; institute the right policies and guardrails to ensure cloud sprawl doesn’t continue; and implement the right tools to allow your team to make informed decisions regularly.

The good news is that the future is bright! Once you have completely assessed your current environment, taken the pillars into account, made the changes required to optimize your cloud, and found a method by which to make this process continuous, you can investigate optimization through application refactoring, ephemeral instances, spot instances and serverless architecture.

The promised cost savings of public cloud is reachable, if only you know where to look.

2nd Watch offers a Cloud Cost Optimization service that can help guide you through this process. Our Cloud Cost Optimization service is guaranteed to reduce your cloud computing costs by 20%,* increasing efficiency and performance. Our proven methodology empowers you to make data-driven decisions in context, not relying on tools alone. Cloud cost optimization doesn’t have to be time-consuming and challenging. Start your cloud cost optimization plan with our proven method for success at https://offers.2ndwatch.com/cloud-cost-optimization-20-offer-2020.

*To qualify for guaranteed 20% savings, must have at least $50,000/month cloud usage.

-Stefana Muller, Sr. Product Manager


Managing Azure Cloud Governance with Resource Policies

I love an all-you-can-eat buffet. You get a ton of value, plenty to choose from, and you can eat as much or as little as you want, all for a fixed price.

In the same regard, I love the freedom and vast array of technologies that the cloud allows you: a technological all-you-can-eat buffet, if you will. However, there is no fixed price when it comes to the cloud. You pay for every resource! And as you can imagine, it can become quite costly if you are not mindful.

So, how do organizations govern and ensure that their cloud spend is managed efficiently? Well, in Microsoft’s Azure cloud you can mitigate this issue using Azure resource policies.

Azure resource policies allow you to define what, where or how resources are provisioned, thus allowing an organization to set restrictions and enable some granular control over their cloud spend.

Azure resource policies allow an organization to control things like:

  • Where resources are deployed – Azure has more than 20 regions all over the world. Resource policies can dictate what regions their deployments should remain within.
  • Virtual Machine SKUs – Resource policies can define only the VM sizes that the organization allows.
  • Azure resources – Resource policies can define the specific resources that fall within an organization’s supportable technologies and restrict others that are outside the standards. For instance, if your organization supports SQL Server and Oracle databases but not Cosmos DB or MySQL, resource policies can enforce those standards.
  • OS types – Resource policies can define which OS flavors and versions are deployable in an organization’s environment. No longer support Windows Server 2008, or want to limit the Linux distros to a small handful? Resource policies can assist.

Azure resource policies are applied at the resource group or the subscription level. This allows granular control of the policy assignments. For instance, in a non-prod subscription you may want to allow non-standard and non-supported resources to allow the development teams the ability to test and vet new technologies, without hampering innovation. But in a production environment standards and supportability are of the utmost importance, and deployments should be highly controlled. Policies can also be excluded from a scope. For instance, an application that requires a non-standard resource can be excluded at the resource level from the subscription policy to allow the exception.

A number of pre-defined Azure resource policies are available for your use, including:

  • Allowed locations – Used to enforce geo-location requirements by restricting which regions resources can be deployed in.
  • Allowed virtual machine SKUs – Restricts the virtual machine sizes/SKUs that can be deployed to a predefined set. Useful for controlling the costs of virtual machine resources.
  • Enforce tag and its value – Requires resources to be tagged. This is useful for tracking resource costs for purposes of department chargebacks.
  • Not allowed resource types – Identifies resource types that cannot be deployed. For example, you may want to prevent a costly HDInsight cluster deployment if you know your group would never need it.

Azure also allows custom resource policies when you need a restriction that isn’t covered by a built-in policy. A policy definition is described in JSON and includes a policy rule.

This JSON example denies a storage account from being created without blob encryption being enabled:

{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Storage/storageAccounts"
      },
      {
        "field": "Microsoft.Storage/storageAccounts/enableBlobEncryption",
        "equals": "false"
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
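For completeness, here is a hedged Python sketch of creating and assigning that definition with the azure-mgmt-resource SDK. The names, scope, and credential setup are placeholders, and method signatures vary between SDK versions, so treat this as an outline rather than copy-paste code:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "00000000-0000-0000-0000-000000000000"   # placeholder
client = PolicyClient(DefaultAzureCredential(), subscription_id)

# Register the rule shown above as a custom policy definition.
definition = client.policy_definitions.create_or_update(
    "deny-unencrypted-storage",
    {
        "display_name": "Deny storage accounts without blob encryption",
        "policy_rule": {
            "if": {"allOf": [
                {"field": "type", "equals": "Microsoft.Storage/storageAccounts"},
                {"field": "Microsoft.Storage/storageAccounts/enableBlobEncryption",
                 "equals": "false"},
            ]},
            "then": {"effect": "deny"},
        },
    },
)

# Assign it at subscription scope (a resource group scope works the same way).
client.policy_assignments.create(
    f"/subscriptions/{subscription_id}",
    "deny-unencrypted-storage",
    {"policy_definition_id": definition.id},
)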

The use of Azure resource policies can go a long way toward ensuring that your organization’s Azure deployments meet your governance and compliance goals. For more information on Azure resource policies, visit https://docs.microsoft.com/en-us/azure/azure-policy/azure-policy-introduction.

For help in getting started with Azure resource policies, contact us.

-David Muxo, Sr Cloud Consultant


2W Insight Cloud Cost Accounting Tool: New Features

Enabling enterprises to accurately distribute cloud expenses to their unique cost reporting structure

Accurate distribution of cloud costs among business units, applications, projects, etc. according to accepted accounting practices is one of the great challenges facing enterprise IT managers and financial accountants today. 2W Insight 7.0 simplifies cloud cost accounting by enabling enterprises to create an organizational hierarchy of cost centers aligned to their reporting structure, where resources are assigned, budgets are managed, and financial reports are published.

Organizational Hierarchy

2W Insight 7.0 enables enterprises to create a multi-level organizational structure of cost centers tailored to their financial reporting requirements. Users create cost centers for each project, application, workload, etc., then map them to the financial reporting structure. Once the cost centers and structure are established, users assign cloud resources (including reserved instances) to the cost centers where the costs are incurred. 2W Insight applies AWS pricing rules to the usage within each cost center. As you move up the hierarchy, 2W Insight combines the usage from the linked (lower-level) cost centers and re-applies the AWS pricing rules to the combined usage, ensuring pricing is accurate, earned tier discounts are applied, and reserved instance savings are optimized.

Example Organizational Reporting Structure:

Assigning Resources to Cost Centers

2W Insight 7.0 enables enterprises to deploy various strategies when assigning resources to cost centers. Enterprises that place each workload in a separate AWS account can assign the account to a cost center; once assigned, all usage and cost in that AWS account is included in the cost center. For enterprises that run multiple workloads in a single AWS account, 2W Insight enables users to filter the resources in one or more accounts (by tag, attribute, etc.) to locate and assign resources to cost centers. Once assigned, a rule can be added to automatically place new resources that meet the filter criteria into the cost center. This provides strict governance and control of resource assignments and accurate financial reporting, and it ensures that the elastic nature of the cloud (resources coming and going based on demand) stays aligned to the enterprise’s cloud cost accounting policies.

Budget Management and Alerting

Once the organizational structure is created and resources have been assigned to cost centers, it is important to manage the budget for each cost center. 2W Insight allows users to set budgets for each cost center and receive notifications when budgets are at risk. Users can receive alerts when a single day’s usage exceeds a set daily budget threshold (e.g., a single day’s cost is 120% of the daily budget), when the month-to-date cost exceeds a set monthly budget threshold (e.g., month-to-date usage reaches 100% of the monthly budget), or when the month-to-date cost exceeds a set month-to-date budget threshold (e.g., MTD cost exceeds MTD budget by 10%). Budget management and alerting ensure you know in advance if your costs are at risk of exceeding budget.
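The alert logic itself is simple threshold arithmetic. The generic sketch below illustrates the three conditions described above (2W Insight evaluates these for you; the numbers here are made up):

def budget_alerts(daily_cost, daily_budget, mtd_cost, monthly_budget, mtd_budget):
    """Return which of the three alert conditions have fired."""
    alerts = []
    if daily_cost > 1.20 * daily_budget:    # a single day at 120% of the daily budget
        alerts.append("daily spike")
    if mtd_cost >= monthly_budget:          # month-to-date reaches 100% of the monthly budget
        alerts.append("monthly budget reached")
    if mtd_cost > 1.10 * mtd_budget:        # month-to-date exceeds the MTD budget by 10%
        alerts.append("tracking over budget")
    return alerts

print(budget_alerts(daily_cost=400, daily_budget=300,
                    mtd_cost=4200, monthly_budget=9000, mtd_budget=3600))
# -> ['daily spike', 'tracking over budget']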

Showback reports

2W Insight comes standard with month-end reports for each of your cost centers. These “Showback” reports detail the costs associated with each cost center by AWS product, and users can be set up to receive the reports at the end of each month. Once users begin receiving these reports, they become more aware of, and therefore more responsible for, their AWS spend.

2W Insight Cloud Cost Accounting tool is provided at no charge to all of our Managed Cloud Services customers. To receive a demonstration of its capabilities and how 2nd Watch helps our clients manage the complexity of the public cloud, please contact us at insight.support@2ndwatch.com.

-Tim Hill, Product/Program Manager
