Cloud Migration & Governance Thoughts: AWS Landing Zone & the Infrastructure as Code debate
What are the biggest AWS Landing Zone challenges we’ve seen? Where do we stand on AWS CloudFormation vs Terraform vs ARM? Our experts weigh in.
When IT organizations adopt infrastructure as code (IaC), the benefits in productivity, quality, and ability to function at scale are manifold. However, the first few steps on the journey to full automation and immutable infrastructure bliss can be a major disruption to a more traditional IT operations team’s established ways of working. One of the common problems faced in adopting infrastructure as code is how to structure the files within a repository in a consistent, intuitive, and scalable manner. Even IT operations teams whose members have development skills will still face this anxiety-inducing challenge, simply because adopting IaC involves new tools whose conventions differ somewhat from more familiar languages and frameworks.
In this blog post, we’ll go over how we structure our IaC repositories within 2nd Watch professional services and managed services engagements with a particular focus on Terraform, an open-source tool by Hashicorp for provisioning infrastructure across multiple cloud providers with a single interface.
The first task in any new repository is to create a README file. Many git repositories (especially on GitHub) have adopted Markdown as a de facto standard format for README files. A good README file will include the following information:
It’s important that you do not neglect this basic documentation for two reasons (even if you think you’re the only one who will work on the codebase):
All repositories should also include a .gitignore file with the appropriate settings for Terraform. GitHub’s default Terraform .gitignore is a decent starting point, but in most cases you will not want to ignore .tfvars files because they often contain environment-specific parameters that allow for greater code reuse as we will see later.
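As a rough sketch, a Terraform-oriented .gitignore along these lines is a common starting point (the entries are illustrative; note that, unlike GitHub’s default, .tfvars files are deliberately not ignored):

# Local state and state backups (never commit state)
*.tfstate
*.tfstate.backup

# Local provider and module cache
.terraform/

# Crash logs
crash.log

# Note: .tfvars files are intentionally NOT ignored here, because they hold
# environment-specific parameters that we want under version control.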
A Terraform root is the unit of work for a single terraform apply command. We group our infrastructure into multiple terraform roots in order to limit our “blast radius” (the amount of damage a single errant terraform apply can cause).
Here’s what our roots directory might look like for a sample project with a VPC, two application stacks, and three environments (QA, Staging, and Production):
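(Directory and file names below are illustrative, not prescriptive.)

terraform/
└── roots/
    ├── vpc/
    │   ├── main.tf
    │   ├── qa.tfvars
    │   ├── staging.tfvars
    │   └── prod.tfvars
    ├── app1/
    │   ├── main.tf
    │   ├── qa.tfvars
    │   ├── staging.tfvars
    │   └── prod.tfvars
    └── app2/
        ├── main.tf
        ├── qa.tfvars
        ├── staging.tfvars
        └── prod.tfvars

Each root gets its own state, so an errant terraform apply in app1 cannot touch the VPC or app2, and each environment’s parameters live in a small .tfvars file that is passed to the same code.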
Terraform modules are self-contained packages of Terraform configurations that are managed as a group. Modules are used to create reusable components, improve organization, and to treat pieces of infrastructure as a black box. In short, they are the Terraform equivalent of functions or reusable code libraries.
Terraform modules come in two flavors: internal modules, which live in the same repository as the code that consumes them, and external modules, which are sourced from a separate repository or registry.
In this post, we’ll only be covering internal modules.
Here’s what our modules directory might look like:
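(Module names below are purely illustrative.)

terraform/
└── modules/
    ├── alb/            # wraps an application load balancer and its security groups
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    └── rds/            # wraps an RDS instance, subnet group, and parameter group
        ├── main.tf
        ├── variables.tf
        └── outputs.tf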
Terraform is often used alongside other automation tools within the same repository. Some frequent collaborators include Ansible for configuration management and Packer for compiling identical machine images across multiple virtualization platforms or cloud providers. When using Terraform in conjunction with other tools within the same repo, 2nd Watch creates a directory per tool from the root of the repo:
The following illustrates a sample Terraform repository structure with all of the concepts outlined above:
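(A sketch rather than a prescription; this example also uses Ansible and Packer alongside Terraform, as described above.)

my-infrastructure-repo/
├── README.md
├── .gitignore
├── ansible/            # configuration management playbooks and roles
├── packer/             # machine image templates
└── terraform/
    ├── modules/
    │   ├── alb/
    │   └── rds/
    └── roots/
        ├── vpc/
        ├── app1/
        └── app2/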
There’s no single repository format that’s optimal, but we’ve found that this structure works for the majority of our use cases across dozens of Terraform projects. That said, if you find a tweak that works better for your organization, go for it! The structure described in this post will give you a solid, battle-tested starting point to keep your Terraform code organized so your team can stay productive.
For help getting started adopting Infrastructure as Code, contact us.
In this post, we’ll go over a complete workflow for continuous integration (CI) and continuous delivery (CD) for infrastructure as code (IaC) with just 2 tools: Terraform, and Atlantis.
So what is Terraform? According to the Terraform website:
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
In practice, this means that Terraform allows you to declare what you want your infrastructure to look like – in any cloud provider – and will automatically determine the changes necessary to make it so. Because of its simple syntax and cross-cloud compatibility, it’s 2nd Watch’s choice for infrastructure as code.
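As a minimal illustration of that declarative style (the resource and bucket name below are hypothetical), you describe the end state and Terraform works out the API calls needed to reach it:

# Declare the bucket we want to exist; on each run Terraform determines
# whether to create it, update it, or leave it alone.
resource "aws_s3_bucket" "logs" {
  bucket = "example-team-logs"
  acl    = "private"
}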
When you have multiple collaborators (individuals, teams, etc.) working on a Terraform codebase, some common problems are likely to emerge:
And what is Atlantis? Atlantis is an open-source tool that allows safe collaboration on Terraform projects by making sure that proposed changes are reviewed and that the proposed change is the actual change that gets executed on your infrastructure. Atlantis is compatible (at the time of writing) with GitHub and GitLab, so if you’re not using either of these Git hosting systems, you won’t be able to use Atlantis.
Atlantis is deployed as a single binary executable with no system-wide dependencies. An operator adds a GitHub or GitLab token for a repository containing Terraform code. The Atlantis installation process then adds webhooks to the repository, which allow communication with the Atlantis server during the pull request process.
You can run Atlantis in a container or a small virtual machine – the only requirement is that the Atlantis instance can communicate with both your version control system (e.g., GitHub) and the infrastructure (e.g., AWS) you’re changing. Once Atlantis is configured for a repository, the typical workflow is:
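(The steps below are a representative outline; the details vary with how Atlantis is configured.)

1. An engineer opens a pull request containing Terraform changes.
2. Atlantis runs terraform plan and posts the plan output as a comment on the pull request.
3. Teammates review both the code and the plan output.
4. Once the pull request is approved, a reviewer or the author comments “atlantis apply”, and Atlantis applies the previously generated plan.
5. Atlantis reports the result of the apply back to the pull request, and the branch can then be merged.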
The following sequence diagram illustrates the sequence of actions described above:
Atlantis sequence diagram
We can see how our pain points in Terraform collaboration are addressed by Atlantis:
You can see that with minimal additional infrastructure you can establish a safe and reliable CI/CD pipeline for your infrastructure as code, enabling you to get more done safely! To find out how you can deploy a CI/CD pipeline in less than 60 days, contact us.
-Josh Kodroff, Associate Cloud Consultant
When it comes to managing cloud-based resources, it’s hard to find a better tool than Hashicorp’s Terraform. Terraform is an ‘infrastructure as code’ application, marrying configuration files with backing APIs to provide a nearly seamless layer over your various cloud environments. It allows you to declaratively define your environments and their resources through a process that is structured, controlled, and collaborative.
One key advantage Terraform provides over other tools (like AWS CloudFormation) is having a rapid development and release cycle fueled by the open source community. This has some major benefits: features and bug fixes are readily available, new products from resource providers are quickly incorporated, and you’re able to submit your own changes to fulfill your own requirements.
Hashicorp recently released v0.10.0 of Terraform, introducing some fundamental changes in the application’s architecture and functionality. We’ll review the three most notable of these changes and how to incorporate them into your existing Terraform projects when migrating to Terraform v0.10.x:
1. Terraform Providers are no longer distributed as part of the main Terraform distribution
2. A new auto-approve flag for terraform apply
3. Existing terraform env commands replaced by terraform workspace
A brief note on Terraform versions:
Even though Terraform uses a style of semantic versioning, its ‘minor’ versions should be treated as ‘major’ versions.
1. Terraform Providers are no longer distributed as part of the main Terraform distribution
The biggest change in this version is the removal of provider code from the core Terraform application.
Terraform Providers are responsible for understanding API interactions and exposing resources for a particular platform (AWS, Azure, etc.). They know how to initialize and call their applications or CLIs, handle authentication and errors, and convert HCL into the appropriate underlying API calls.
It was a logical move to split the providers out into their own distributions. The core Terraform application can now add features and release bug fixes at a faster pace, new providers can be added without affecting the existing core application, and new features can be incorporated and released to existing providers without as much effort. Having split providers also allows you to update your provider distribution and access new resources without necessarily needing to update Terraform itself. One downside of this change is that you have to keep up to date with features, issues, and releases of more projects.
The provider repos can be accessed via the Terraform Providers organization in GitHub. For example, the AWS provider can be found here.
Custom Providers
An extremely valuable side-effect of having separate Terraform Providers is the ability to create your own, custom providers. A custom provider allows you to specify new or modified attributes for existing resources in existing providers, add new or unsupported resources in existing providers, or generate your own resources for your own platform or application.
You can find more information on creating a custom provider from the Terraform Provider Plugin documentation.
1.1 Configuration
The nicest part of this change is that it doesn’t really require any additional modifications to your existing Terraform code if you were already using a Provider block.
If you don’t already have a provider block defined, you can find their configurations from the Terraform Providers documentation.
You simply need to run the terraform init command before you can perform any other action. If you fail to do so, you’ll receive an error informing you of the required actions (img 1a).
After successfully reinitializing your project, you will be provided with the list of providers that were installed as well as the versions requested (img 1b).
You’ll notice that Terraform suggests versions for the providers we are using – this is because we did not pin any provider versions in our code. Since providers are now independently released entities, we have to tell Terraform what code it should download and use to run our project.
(Image 1a: Notice of required reinitialization)
(Image 1b: Response from successful reinitialization)
Providers are released separately from Terraform itself, and maintain their own version numbers.
You can specify the version(s) you want to target in your existing provider blocks by adding the version property (code block 1). These versions should follow the semantic versioning specification (similar to node’s package.json or python’s requirements.txt).
For production use, it is recommended to limit the acceptable provider versions to ensure that new versions with breaking changes are not automatically installed.
(Code Block 1: Provider Config)
provider "aws" {
version = "0.1.4"
allowed_account_ids = ["1234567890"]
region = "us-west-2"
}
(Image 1c: Currently defined provider configuration)
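One common way to apply that recommendation is a pessimistic version constraint, which keeps you on a known-good series while still picking up patch releases (the version shown is purely illustrative):

provider "aws" {
  version = "~> 0.1.4"   # any 0.1.x release at or above 0.1.4, but not 0.2.0
  region  = "us-west-2"
}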
2. New auto-approve flag for terraform apply
In previous versions, running terraform apply would immediately apply any changes between your project and the saved state. Your normal workflow would likely be: run terraform plan followed by terraform apply, and hope nothing changed in between. This version introduces a new auto-approve flag which controls the behavior of terraform apply.
Deprecation Notice
This flag currently defaults to true to maintain backwards compatibility, but it will change to false in the near future.
2.1 auto-approve=true (current default)
When set to true, terraform apply will work as it has in previous versions.
If you want to maintain this behavior, you should update your scripts, build systems, etc. now, as this default value will change in a future Terraform release.
(Code Block 2: Apply with default behavior)
# Apply changes immediately without plan file
terraform apply --auto-approve=true
2.2 auto-approve=false
When set to false, Terraform will present the user with the execution plan and pause for interactive confirmation (img 2a). If the user provides any response other than yes, Terraform will exit without applying any changes. If the user confirms the execution plan with a yes response, Terraform will then apply the planned changes (and only those changes).
If you are trying to automate your Terraform scripts, you might want to consider producing a plan file for review, then providing explicit approval to apply the changes from the plan file.
(Code Block 3: Apply plan with explicit approval)
# Create Plan
terraform plan -out=tfplan
# Apply approved plan
terraform apply tfplan --auto-approve=true
(Image 2a: Terraform apply with execution plan)
3. Existing terraform env commands replaced by terraform workspace
The terraform env family of commands has been replaced by terraform workspace to help alleviate some confusion in functionality. Workspaces are very useful and can do much more than just split up environment state (which they aren’t necessarily intended for). I recommend checking them out and seeing if they can improve your projects.
There is not much to do here other than switch the command invocation; the previous commands still work for now, but they are deprecated.
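The mapping is essentially one-to-one; for example, where you previously ran terraform env subcommands you now run their workspace equivalents:

# List existing workspaces (formerly: terraform env list)
terraform workspace list

# Create and switch to a new workspace (formerly: terraform env new staging)
terraform workspace new staging

# Switch between existing workspaces (formerly: terraform env select staging)
terraform workspace select staging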
— Steve Byerly, Principal SDE (IV), Cloud, 2nd Watch
Introduction
Let’s start with a brief look at the current landscape of technology and how we arrived here. There aren’t many areas of tech that have not been, or are not currently, in a state of flux. Everything from software delivery vehicles and development practices to infrastructure creation has experienced some degree of transformation over the past several years. From VMs to containers, it seems like almost every day the technology tool belt grows a little bigger, and our world gets a little better (though perhaps more complex) due to these advancements. For me, this was incredibly apparent when I began to delve into configuration management, which later evolved into what we now call “infrastructure as code”.
The transformation of the development process began with simple systems that we once used to manage a few machines (like bash scripts or Makefiles) which then morphed into more complex systems (CF Engine, Puppet, and Chef) to manage thousands of systems. As configuration management software became more mature, engineers and developers began leaning on them to do more things. With the advent of hypervisors and the rise of virtual machines, it was only a short time before hardware requests changed to API requests and thus the birth of infrastructure as a service (IaaS). With all the new capabilities and options in this brave new world, we once again started to lean on our configuration management systems—this time for provisioning, and not just convergence.
Provisioning & Convergence
I mentioned two terms that I want to clarify: provisioning and convergence. Say you were a car manufacturer and you wanted to make a car. Provisioning would be the step in which you request the raw materials to make the parts for your automobile. This is where we would use tools like Terraform, CloudFormation, or Heat. Convergence, on the other hand, is the assembly line by which we check each part and assemble the final product (utilizing configuration management software).
By and large, the former tends to be declarative with little in the way of conditionals or logic, while the latter is designed to be robust and malleable software that supports all the systems we run and plan on running. This is the frame for the remainder of what we are going to talk about.
By separating the concerns of our systems, we can create a clear delineation of the purpose for each tool so we don’t feel like we are trying to jam everything into an interface that doesn’t have the most support for our platform or more importantly our users. The remainder of this post will be directed towards the provisioning aspect of configuration management.
Standards and Standardization
These are two different things in my mind. Standardization is extremely prescriptive and can often seem particularly oppressive to professional knowledge workers, such as engineers or developers. It can be seen as taking the innovation away from the job. Standards, by contrast, provide boundaries, frame the problem, and allow for innovative ways of approaching solutions. I am not saying standardization in some areas is entirely bad, but we should let the people who do the work have the opportunity to grow and innovate in their own way, with guidance. The topic of standards and standardization is part of a larger conversation about culture and change, and we intend to follow up with a series of blog articles on organizational change in the era of the public cloud in the coming weeks.
So, let’s say that we make a standard for our new EC2 instances running Ubuntu. We’ll say that all instances must be running the latest official Canonical Ubuntu 14.04 AMI and must have these three tags: Owner, Environment, and Application. How can we enforce that in the development of our infrastructure? On AWS, we can create AWS Config Rules, but that approach is reactive and requires ad-hoc remediation. What we really want is a more prescriptive approach that brings our standards closer to the development pipeline. One of the ways I like to solve this issue is by creating an abstraction. Say we have a Terraform template that looks like this:
# Create a new instance of the latest Ubuntu 14.04 on an
# AWS t2.micro instance
provider "aws" {
  region = "us-west-2"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  tags {
    Owner       = "DevOps Ninja"
    Environment = "Dev"
    Application = "Web01"
  }
}
This would meet the standard that we have set forth, but we are relying on the developer or engineer to adhere to that standard. What if we enforce this standard by codifying it in an abstraction? Let’s take that existing template and turn it into a terraform module instead.
Module
# Create a new instance of the latest Ubuntu 14.04 on an
# AWS EC2 instance, with standard tags passed in as variables
variable "aws_region" {}
variable "ec2_owner" {}
variable "ec2_env" {}
variable "ec2_app" {}
variable "ec2_instance_type" {}

provider "aws" {
  region = "${var.aws_region}"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "${var.ec2_instance_type}"

  tags {
    Owner       = "${var.ec2_owner}"
    Environment = "${var.ec2_env}"
    Application = "${var.ec2_app}"
  }
}
Now we can have our developers and engineers leverage our tf_ubuntu_ec2_instance module.
New Terraform Plan
module "Web01" { source =
"git::ssh://git@github.com/SomeOrg/tf_u buntu_ec2_instance"
aws_region = "us-west-2" ec2_owner = "DevOps Ninja" ec2_env = "Dev"
ec2_app = "Web01"
}
This doesn’t enforce the usage of the module, but it does create an abstraction that provides an easy way to maintain standards without a ton of overhead. It also provides an example for the further creation of modules that enforce these particular standards.
This leads us to another method of implementing standards, one that is more prescriptive and falls into the category of standardization (eek!). One of the most underutilized services in the AWS product stable has to be Service Catalog.
AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.
The Interface
Once we have a few of these projects in place (e.g., a service catalog or a repo full of composable modules for infrastructure that meet our standards), how do we serve them out? How you spur adoption of these tools, and how they are consumed, can differ greatly depending on your organizational structure. We don’t want to upset workflows or how work gets done; we just want it to go faster and be more reliable. This is what we talk about when we mention the interface. Whichever way work flows in, we should supplement it with some type of software or automation to link those pieces of work together. Here are a few examples of how this might look (depending on your organization):
1.) Central IT Managed Provisioning
If you have an organization that manages requests for infrastructure, this paradigm shift might seem daunting. The interface in this case is the ticketing system. This is where we would create an integration with our ticketing software to automatically pull the correct project from the service catalog or module repo based on some criteria in the ticket. The interface doesn’t change but is instead supplemented by some automation to answer these requests, saving time and providing faster delivery of service.
2.) Full Stack Engineers
If you have engineers who develop both software and the infrastructure that runs their applications, this is the easiest scenario to address in some regards and the hardest in others. Your interface might be a build server, or it could simply be the adoption of an internal open source model where each team develops modules and shares them in a common place, constantly trying to save time and avoid re-inventing the wheel.
Supplementing with software or automation can be done in a ton of ways. Check out an example Kelsey Hightower wrote using Jira.
“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” – John Gall
All good automation starts with a manual, well-defined process. Standardizing and automating infrastructure development begins with understanding how our internal teams are best organized and how work flows through them before we begin automating. Work with your teammates to create a value stream map so that you understand the process entirely before putting any effort into automating a workflow.
With 2nd Watch designs and automation you can deploy quicker, learn faster and modify as needed with Continuous Integration / Continuous Deployment (CI/CD). Our Workload Solutions transform on-premises workloads to digital solutions in the public cloud with next generation products and services. To accelerate your infrastructure development so that you can deploy faster, learn more often and adapt to customer requirements more effectively, speak with a 2nd Watch cloud deployment expert today.
– Lars Cromley, Director of Engineering, Automation, 2nd Watch