In April 2017, we sponsored an online survey focused on cloud automation in order to understand if—and how—corporate IT departments are using automation to develop and deliver new workloads and applications. More than 1,000 IT professionals from US companies with at least 1,000 employees participated in the survey. The majority of respondents (56%) said that at least half of their deployment pipelines are now automated, and 63% said they can deploy new applications in less than six weeks.
According to the results of the survey, companies that have embraced cloud automation can deploy new applications and workloads faster and more frequently, and recover from failures with more agility, than organizations that struggle to adopt automated processes, testing, and monitoring. Furthermore, per the survey results, 41% of corporate IT departments are producing more than 10 new cloud workloads every year, and 56% have automated at least half of all their artifact creation and deployment pipelines. Another 66% said that at least half of all their quality assessments (lint, unit tests, etc.) are automated.
“The survey results reiterate what we’re hearing from clients and prospects: automation, driven by cloud technologies, is critical to the rapid delivery of new workloads and applications,” says Jeff Aden, EVP of Marketing & Strategic Business Development & Co-Founder at 2nd Watch. “Companies are automating everything from artifact creation to deployment pipelines and process, which includes metrics, documentation and data. The result is faster time-to-market for new applications, and less application downtime.”
More survey results:
63% said that deploying new applications takes less than six weeks
44% said that deploying new code to production takes a day or less
54% said they are deploying new code changes at least once a week
50% said it takes a day or less to recover from application failure
55% said they are measuring application quality by testing everything
Download the infographic highlighting the results of the Cloud Automation survey here. For questions about how 2nd Watch can help you embrace cloud automation, please contact us today!
(AKA: Where the heck did ‘terraform remote config’ go?!!!)
If you are working with cloud-based architectures or working in a DevOps shop, you’ve no doubt been managing your infrastructure as code. It’s also likely that you are familiar with tools like Amazon CloudFormation and Terraform for defining and building your cloud architecture and infrastructure. For a good comparison of Amazon CloudFormation and Terraform, check out Coin Graham’s blog on the matter: AWS CFT vs. Terraform: Advantages and Disadvantages.
If you are already familiar with Terraform, then you may have encountered a recent change to the way remote state is handled, starting with Terraform v0.9. Continue reading to find out more about migrating Terraform Remote State to a “Backend” in Terraform v0.9+.
First off… if you are unfamiliar with what remote state is, check out this page.
Remote state is a big ol’ blob of JSON that stores the configuration details and state of the infrastructure Terraform has actually deployed. This is pretty dang important if you ever plan on changing your environment (which is “likely,” to put it lightly), and especially important if you want more than one person managing/editing/maintaining the infrastructure, or if you have even the most basic rationale as it pertains to backup and recovery.
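For illustration, a skeleton of that JSON blob looks something like the following (a minimal sketch; the values are hypothetical and the exact schema varies by Terraform version):

{
  "version": 3,
  "terraform_version": "0.9.2",
  "serial": 1,
  "lineage": "d2a5e4c6-0000-0000-0000-000000000000",
  "modules": [
    {
      "path": ["root"],
      "outputs": {},
      "resources": {}
    }
  ]
}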
Terraform supports almost a dozen backend types (as of this writing) including:
Artifactory
Azure
Consul
etcd
GCS
HTTP
Manta
S3
Swift
Terraform Enterprise (AKA: Atlas)
Why not just keep the Terraform state in the same git repo I keep the Terraform code in?
Aside from the fact that git provides no locking (two people running Terraform at the same time could easily clobber each other’s state), you don’t want to store the state file in a code repository because it may contain sensitive information like DB passwords, and because the state changes every time you run Terraform, making it easy to forget to push those changes to your git repo.
So, what happened to terraform remote anyway?
If you’re like me, you probably run the latest version of HashiCorp’s Terraform tool as soon as it is available (we actually have a hook in our team Slack channel that notifies us when a new version is released). With the release of Terraform v0.9 last month, we were endowed with the usual heaping helping of excellent new features and bug-fixes we’ve come to expect from the folks at HashiCorp, but were also met with an unexpected change in the way remote state is handled.
Unless you are religious about reading the release notes, you may have missed an important change in v0.9 around remote state. The release notes don’t specifically call out the removal (not even a deprecation, but FULL removal) of the prior method, terraform remote config; the Upgrade Guide, however, specifically calls out the process of migrating from the legacy method to the new method of managing remote state. More specifically, it provides a link to a guide for migrating from the legacy remote state config to the new backend system. The steps are pretty straightforward, and the new approach is much improved over the prior method for managing remote state. So, while the change is good, a deprecation warning in v0.8 would have been much appreciated. At least it is still backwards compatible with legacy remote state files (up to version 0.10), making the migration process much less painful.
Prior to v0.9, you may have been managing your Terraform remote state in an S3 bucket utilizing the terraform remote config command. You could provide arguments like -backend and -backend-config to configure things like the S3 region, bucket, and key where you wanted to store your remote state. Most often, this looked like a shell script in the root of your Terraform project that you ran whenever you wanted to initialize or configure your backend for that project.
Something like…
Terraform Legacy Remote S3 Backend Configuration Example
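A minimal sketch of what that shell script might contain (the bucket, key, and region values are placeholders):

#!/usr/bin/env bash
# Legacy (pre-v0.9) remote state setup via the CLI
terraform remote config \
  -backend=s3 \
  -backend-config="bucket=my-terraform-state-bucket" \
  -backend-config="key=project/terraform.tfstate" \
  -backend-config="region=us-east-1"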
This was a bit clunky but functional. Regardless, it was rather annoying having some configuration elements outside of the normal terraform config (*.tf) files.
Along came Terraform v0.9
The introduction of Terraform v0.9, with its newfangled “backends,” makes things much more seamless and transparent. Now we can replicate that same remote state configuration with a backend block right in the Terraform configuration, like so:
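Here is a minimal sketch of that backend block (bucket, key, and region are again placeholders):

# New-style (v0.9+) backend configuration, kept in a normal *.tf file
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "project/terraform.tfstate"
    region = "us-east-1"
  }
}

With that block in place, running terraform init configures the backend and, if legacy remote state is detected, offers to migrate it: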
> terraform init
Downloading modules (if any)...
Initializing the backend...
New backend configuration detected with legacy remote state!
Terraform has detected that you're attempting to configure a new backend.
At the same time, legacy remote state configuration was found. Terraform will
first configure the new backend, and then ask if you'd like to migrate
your remote state to the new backend.
Do you want to copy the legacy remote state from "s3"?
Terraform can copy the existing state in your legacy remote state
backend to your newly configured backend. Please answer "yes" or "no".
Enter a value: no
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.
Verify the new state is copacetic:
> terraform plan
...
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.
Finally, commit and push.
In closing…
Managing your infrastructure as code isn’t rocket science, but it also isn’t trivial. Having a solid understanding of cloud architectures, the AWS Well-Architected Framework, and DevOps best practices can greatly impact the success you have. A lot goes into architecting and engineering solutions in a way that maximizes your business value, application reliability, agility, velocity, and key differentiators. This can be a daunting task, but it doesn’t have to be! 2nd Watch has the people, processes, and tools to make managing your cloud infrastructure as code a breeze! Contact us today to find out how.
— Ryan Kennedy, Principal Cloud Automation Architect, 2nd Watch
The outbreak of a virulent strain of ransomware, alternately known as WannaCry or WannaCrypt, is finally winding down. A form of malware, the WannaCry attack exploited certain vulnerabilities in Microsoft Windows and infected hundreds of thousands of Windows computers worldwide. As the dust begins to settle, the conversation inevitably turns to what could have been done to prevent it.
The first observation is that most organizations could have been protected simply by following best practices—most notably, the regular installation of known security and critical patches that help to minimize vulnerabilities. WannaCry was not an exotic “zero day” incident. The patch for the underlying vulnerabilities (MS17-010) has been available since March. Companies like 2nd Watch maintain a regular patch schedule to protect their systems from these and similar attacks. It should be noted that, due to the prolific nature of this malware and the active attack vectors, 2nd Watch is requiring that all Windows systems be patched by 5/31/2017.
Other best practices include:
Maintaining support contracts for out-of-date operating systems
Enabling firewalls, in addition to intrusion detection and prevention systems
Proactively monitoring and validating traffic going in and out of the network
Implementing security mechanisms for other points of entry attackers can use, such as email and websites
Deploying application control to prevent suspicious files from executing in addition to behavior monitoring that can thwart unwanted modifications to the system
Employing data categorization and network segmentation to mitigate further exposure and damage to data
Backing up important data. This is the single most effective way of combating ransomware infection. However, organizations should ensure that backups are appropriately protected or stored offline so that attackers can’t delete them.
The importance of regularly scheduled patching and keeping systems up-to-date cannot be overemphasized. It may not be sexy, but it is highly effective.
All of these recommendations seem simple enough, but why did the outbreak spread so quickly if the vulnerabilities were known and patches were readily available? It spread because the patches were released for currently supported systems, but the vulnerability has been present in all versions of Windows dating back to Windows XP. For these older systems – no longer supported by Microsoft but still widely used – the patches weren’t there in the first place. One of the highest profile victims, Britain’s National Health Service, discovered that 90 percent of NHS trusts run at least one Windows XP device, an operating system Microsoft first introduced in 2001 and hasn’t supported since 2014. In fact, it was only because of the high-profile nature of this malware that Microsoft took the rare step this week of publishing a patch for Windows XP, Windows Server 2003 and Windows 8.
This brings us to the challenging topic of “technical debt”—the extra cost and effort to continue using older technology. The WannaCry/WannaCrypt outbreak is simply the most recent teachable moment about those costs.
A big benefit of moving to cloud computing is its ability to help rid one’s organization of technical debt. By migrating workloads into the cloud, and even better, by evolving those workloads into modern, cloud-native architectures, the issue of supporting older servers and operating systems is minimized. As Gartner pointed out in the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide: through 2018, the cloud managed service market will remain relatively immature, and more than 75% of fully successful implementations will be delivered by highly skilled, forward-looking, boutique managed service providers with a cloud-native, DevOps-centric service delivery approach, like 2nd Watch. A free download of the report can be found here.
Partners like 2nd Watch can also help reduce your overall management cost by tailoring solutions to manage your infrastructure in the cloud. The best practices mentioned above (regular patching, resource isolation, traffic monitoring, etc.) can be automated in many environments and are all done for you, so you can focus on your business.
Even more important, companies like 2nd Watch help ensure the ongoing optimization of your workloads, both from a cost and a performance point of view. The life-cycle of optimization and modernization of your cloud environments is perhaps the single greatest mechanism to ensure that you never take on and retain high levels of technical debt.
Controlling costs is one of the greatest challenges facing IT and Finance managers today. The cloud, by nature, makes it easy to spin up new environments and resources that can cost thousands of dollars each month. And, while there are many ways to help control costs, one of the simplest and most effective methods is to set and manage cloud spend-to-budget. While most enterprise budgets are set at the business unit or department level, for cloud spend, mapping that budget down to the workload can establish strong accountability within the organization.
One popular method that workload owners use to manage spend is to track month-over-month cost variances. However, if costs do not drastically increase from one month to another, this method does very little to control spend. It is only when a department is faced with budget issues that workload owners work diligently to reduce costs. That’s because, when budgets are set for each workload, owners become more aware of how their cloud spend impacts the company financials and tend to more carefully manage their costs.
In this post, we provide four easy steps to help you manage workload spend-to-budget effectively.
Step 1: Group Your Cloud Resources by Workload and Environment
Use a financial management tool such as 2nd Watch CMP Finance Manager to group your cloud resources by workload and its environment (Test, Dev, Prod). This can easily be accomplished by creating a standard where each workload/environment has its own cloud account, or by using tags to identify the resources associated with each workload. If using tags, use a tag for the workload name such as workload_name: and a tag for the environment such as environment:. More tagging best practices can be found here.
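As a rough sketch of this tagging approach in Terraform (the resource name, AMI ID, and tag values below are hypothetical):

# Hypothetical example: tag an instance so cost tooling can group it
# by workload and environment.
resource "aws_instance" "workload_a_dev" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    workload_name = "workload-a"
    environment   = "dev"
  }
}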
Step 2: Group Your Workloads and Environments by Business Group
Once your resources are grouped by workload/environment, CMP Finance Manager will allow you to organize your workload/environments into business groups. For example:
a. Business Group 1
i. Workload A
1. Workload A Dev
2. Workload A Test
3. Workload A Prod
ii. Workload B
1. Workload B Dev
2. Workload B Test
3. Workload B Prod
b. Business Group 2
i. Workload C
1. Workload C Dev
2. Workload C Test
3. Workload C Prod
ii. Workload D
1. Workload D Dev
2. Workload D Test
3. Workload D Prod
Step 3: Set Budgets
At this point, you are ready to set up budgets for each of your workloads (both each workload/environment and the total workload, as they may have different owners). We suggest you set annual budgets aligned to your fiscal year and have the tool you use programmatically recalculate the budget at the end of each month based on the amount remaining in your annual budget.
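While this post assumes a tool like CMP Finance Manager, the same idea can be sketched with AWS’s native budgets via the Terraform AWS provider’s aws_budgets_budget resource (a rough sketch; the name, amount, start date, and tag value are hypothetical):

# Hypothetical monthly cost budget scoped to one workload's tag
resource "aws_budgets_budget" "workload_a_prod" {
  name              = "workload-a-prod-monthly"
  budget_type       = "COST"
  limit_amount      = "1000"
  limit_unit        = "USD"
  time_unit         = "MONTHLY"
  time_period_start = "2017-07-01_00:00" # placeholder start of budget period

  # Only count resources carrying this workload's tag.
  cost_filter {
    name   = "TagKeyValue"
    values = ["user:workload_name$workload-a"]
  }
}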
Step 4: Create Alerts
The final step is to create alerts to notify owners and yourself when workloads either have exceeded or are on track to exceed the current month or annual budget amount. Here are some budget notifications we recommend:
Month-end (ME) forecast exceeds month budget
Month-to-date (MTD) spend exceeds MTD budget
MTD spend exceeds month budget
Daily spend exceeds daily budget
Year-end (YE) forecast exceeds year budget
Year-to-date (YTD) spend exceeds YE budget
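Continuing the hypothetical aws_budgets_budget sketch above, a “month-end forecast exceeds month budget” alert could be expressed as a notification block nested inside that resource (the threshold and email address are placeholders):

  # Alert the owner when forecasted spend passes 100% of the budget.
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 100
    threshold_type             = "PERCENTAGE"
    notification_type          = "FORECASTED"
    subscriber_email_addresses = ["workload-owner@example.com"]
  }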
Once alerts are set, owners can make timely decisions regarding spend. The owner can now proactively shift to Spot Instances, purchase Reserved Instances, change instance sizes, park the environment when not in use, or even refactor the application to take advantage of cloud-native services like AWS Lambda.
Our experience has shown that enterprises that diligently set up and manage spend-to-budget by workload have more control of their costs and ultimately, spend less on their cloud environments without sacrificing user experience.
Without a doubt, AWS has fundamentally changed how modern enterprises deploy IT infrastructure. Their services are flexible, cost-effective, scalable, secure, and reliable. And while moving from on-premise data centers to the cloud is, in most cases, the smart move, once there, managing your costs becomes much more complex.
On-premise costs are straightforward: enterprises purchase servers and amortize their costs over the expected life. Shared services such as internet access, racks, power, and cooling are proportionally allocated to the cost of each server. AWS, on the other hand, invoices each usage type separately. For example, if you are running a basic EC2 instance, you will be charged not only for the EC2 box usage but also for data transfer, EBS storage, and associated snapshots. You could end up with as many as 13 line items of cost for a single EC2 instance.
Example: Pricing line items for a single c4.xlarge Linux virtual machine running in the US East Region
When examining the composition of various workload types, the number of line items to manage will vary. A traditional VM-based workload may have 50 cost line items for every $1,000 of spend, while an agile, cloud-native workload may have as many as 500 per $1,000, and a dynamic workload leveraging spot instances may have upwards of 1,200 per $1,000. This “parts bin” approach to pricing makes the job of cost accounting challenging.
To address this complexity and enable accurate cost accounting of your cloud costs, we recommend creating a business-relevant financial tagging schema to organize your resources and associated cost line items based on your specific financial accounting structure.
Recommended financial management tags to consider include dimensions such as workload name, environment, application, and business unit.
The integrity of your AWS tagging data is extremely important in ensuring the quality of the information it provides, and it depends directly on the rigor applied in adopting a systematic and disciplined approach to tagging.
Financial Management Tagging – Best Practices
Create a framework or standard for your enterprise that outlines required tag names, tag formatting rules, and governance of tags.
Tags should be enforced and automated at startup of the resource via CloudFormation templates or other infrastructure-as-code tools, such as Terraform, to ensure cost accounting details are captured from time of launch.
NOTE: Tags are point-in-time based. If a resource is launched without being tagged and then tagged sometime in the future, all hours the resource ran prior to being tagged will not be included in tag reports in the AWS console.
Manually creating tags and associated values is strongly discouraged, as it leads to mistagged and untagged resources and inaccurate cost accounting.
Select all-uppercase or all-lowercase keys and values to avoid discrepancies with capitalization.
NOTE: “Production” and “production” are considered two different tag names or values.
Monitor resources with AWS Config rules and alert on newly created resources that are not tagged (see the sketch below).
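As a minimal sketch, the AWS managed required-tags rule can be declared in Terraform like so (the rule name and tag keys are illustrative, and an AWS Config configuration recorder is assumed to already be running):

# Flag newly recorded resources missing the required financial tags.
resource "aws_config_config_rule" "required_tags" {
  name = "required-financial-tags" # hypothetical rule name

  source {
    owner             = "AWS"
    source_identifier = "REQUIRED_TAGS" # AWS managed rule
  }

  # Tag keys to check for; these match the tags recommended above.
  input_parameters = jsonencode({
    tag1Key = "workload_name"
    tag2Key = "environment"
  })
}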
Once your tagging schema is created, automation is in place to tag resources during startup, and alerts are set up to ensure tagging is managed, you can accurately view, track, and report your cost and usage along any of your tagging dimensions.
Financial Management Reporting – Best Practices
Using your tagging schema, group your resources by workload.
Apply Reserved Instance discounts to the workloads for which they were intended.
NOTE: 2nd Watch’s CMP Finance Manager tool converts reserved instances into resources so that you can add them to the workload they were intended for.
Organize your groups to match your specific multi-level financial reporting structure.
Manage shared resources
Create groups for shared resources. If you have resources that are shared across multiple workloads, such as a database used by multiple applications or virtual machines with more than one application running on them, create groups to capture these costs and allocate them proportionally to the applications using them. For example, a $300-per-month database shared equally by three applications would contribute $100 per month to each application’s cost.
Manage untaggable resources
Create a group for untaggable resources. Some AWS resources are not taggable; group them together and proportionally allocate their associated costs to all applications.
Manage spend to budget
Create budgets and budget alerts for each group to ensure you stay in budget throughout the year.
Key alerts
Forecasted month-end cost exceeds alert threshold
MTD cost is over alert threshold
Forecasted year-end cost exceeds alert threshold
YTD cost is over alert threshold
Sign up to receive monthly cost and usage reports for integration into your internal cost accounting system.
Cost by application, environment, business unit, etc.
Even though AWS’ “parts bin” approach to pricing is complicated, following these guidelines will help ensure accurate cost accounting of your cloud spend.