An Introduction to AWS Proton

As a business scales, so do its software and infrastructure. As desired outcomes evolve and become more complex, platform teams can quickly accumulate overhead that is difficult to manage over time, and these challenges often limit the benefits of embracing containers and serverless. Shared services offer many advantages in these scenarios by providing a consistent developer experience while also increasing productivity and the effectiveness of governance and cost management.

At re:Invent in December 2020, Amazon Web Services announced AWS Proton: a service aimed at providing tooling to manage complex environments while bridging infrastructure and deployment for developers. In this blog we will take a closer look at the benefits of the AWS Proton service offering.

What is AWS Proton?

AWS Proton is a fully managed delivery service, targeted at container and serverless workloads, that gives engineering teams tooling to automate provisioning and deployments while providing observability and enforcing compliance and best practices. With AWS Proton, development teams use administrator-curated templates to provision infrastructure and deploy their code. This in turn increases developer productivity by allowing developers to focus on their code and software delivery, reduces management overhead, and increases release frequency. Teams can use AWS Proton through the AWS Console and the AWS CLI, allowing them to get started quickly and automate complicated operations over time.
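
For example, once templates have been published, they can be discovered straight from the AWS CLI. A minimal sketch (the template name below is hypothetical):

    # List the service templates the platform team has published
    aws proton list-service-templates

    # Inspect the available versions of a specific template
    aws proton list-service-template-versions --template-name my-etl-service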

How does it work?

The AWS Proton framework allows administrators to define versioned templates that standardize infrastructure, enforce guardrails, leverage Infrastructure as Code with CloudFormation, and provide CI/CD with CodePipeline and CodeBuild to automate provisioning and deployments. Once service templates are defined, developers can choose a template and use it to deploy their software. As new code is released, the CI/CD pipelines automatically deploy the changes. Additionally, as new template versions are defined, AWS Proton provides a "one-click" interface that allows administrators to roll out infrastructure updates across all instances running outdated template versions.
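
When an administrator rolls an instance forward to a newer template version, the equivalent AWS CLI call looks roughly like the following sketch (the service and instance names are hypothetical):

    # Update one service instance to the latest compatible minor version
    aws proton update-service-instance \
        --service-name etl-service \
        --name etl-service-staging \
        --deployment-type MINOR_VERSION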

When is AWS Proton right for you?

AWS Proton is built for teams looking to centrally manage their cloud resources. The service interface is built for teams to provision, deploy, and monitor applications. AWS Proton is worth considering if you are using cloud-native services like serverless applications or if you run containers in AWS. The benefits grow when working with a service-oriented architecture, microservices, or distributed software, as AWS Proton eases release management, reduces lead time, and creates an environment where teams operate within a set of rules with little to no additional overhead. AWS Proton is also a good option if you are looking to introduce Infrastructure as Code or CI/CD pipelines to new or even existing software, as AWS Proton supports linking existing resources.

Getting Started with AWS Proton is easy!

Platform Administrators

Since AWS Proton itself is free and you only pay for the underlying resources, you are only a few steps away from giving it a try! First, a member of the platform infrastructure team creates an environment template. An environment defines infrastructure that is foundational to your applications and services, including compute, networking (VPCs), CI/CD pipelines, security, and monitoring. Environments are defined via CloudFormation templates and use Jinja for parameters rather than the conventional Parameters section in standard CloudFormation templates. You can find template parameter examples in the AWS documentation. You can create, view, update, and manage your environment templates and their versions in the AWS Console.
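
As a brief sketch, an environment template's CloudFormation file might expose a CIDR range through Proton's Jinja-style input namespace instead of a Parameters section (the vpc_cidr input name is hypothetical):

    Resources:
      VPC:
        Type: AWS::EC2::VPC
        Properties:
          # Jinja-style Proton input instead of a CloudFormation parameter
          CidrBlock: "{{ environment.inputs.vpc_cidr }}"
          EnableDnsSupport: true
          EnableDnsHostnames: true
    Outputs:
      VpcId:
        # Environment outputs can later be consumed by service templates
        Value: !Ref VPC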

Once an environment template is created, the platform administrator creates a service template, which defines all resources that logically belong to a service. For example, if we had a container that performs some ETL, the service template could contain an ECR repository, an ECS cluster, an ECS service definition, an ECS task definition, IAM roles, and the ETL source and target storage.
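
A fragment of such a service template might wire developer-supplied inputs and environment outputs together as in this sketch (the input and output names are hypothetical):

    Resources:
      EtlService:
        Type: AWS::ECS::Service
        Properties:
          # Supplied by the environment the service is deployed into
          Cluster: "{{ environment.outputs.ClusterName }}"
          # Supplied by the developer at deployment time
          DesiredCount: "{{ service_instance.inputs.desired_count }}"
          TaskDefinition: !Ref EtlTaskDefinition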

In another example, we could have an asynchronous Lambda function that performs some background tasks, along with its corresponding execution role. You could also consider using schema files for parameter validation! Like environment templates, you can create, view, update, and manage your service templates and their versions in the AWS Console.
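
Schema files use an OpenAPI-style format to declare and validate inputs. A minimal sketch for the Lambda example (the type and property names are hypothetical):

    schema:
      format:
        openapi: "3.0.0"
      service_input_type: "BackgroundTaskInput"
      types:
        BackgroundTaskInput:
          type: object
          description: "Inputs for the background task service"
          properties:
            memory_size:
              type: number
              default: 128
              minimum: 128
              maximum: 1024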

Once the templates have been created, the platform administrator can publish the templates and provision the environment. Since services also include CI/CD pipelines, platform administrators should also configure repository connections by creating the GitHub app connector. This is done in the AWS Developer Tools service, or a link can be found on the AWS Proton page in the Console.

Once authorized, the GitHub app is automatically created and integrated with AWS, and CI/CD pipelines will automatically detect available connections during service configuration.
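
The same connection can also be created from the CLI via the CodeStar connections API and then authorized in the Console. A sketch (the connection name is hypothetical):

    # The connection starts in PENDING until it is authorized in the Console
    aws codestar-connections create-connection \
        --provider-type GitHub \
        --connection-name proton-github

    # Confirm it shows as AVAILABLE once the handshake is complete
    aws codestar-connections list-connections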

At this time, platform administrators should see a stack that contains the environment's resources. They can validate each resource, along with interconnectivity, security, auditing, and operational excellence.
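
One quick way to perform that review is through CloudFormation itself; for example (the stack name below is hypothetical, as Proton generates its own stack names):

    # Enumerate every resource the environment stack created
    aws cloudformation describe-stack-resources --stack-name my-proton-environment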

Developers

At this point, developers can choose which template version they will use to deploy their service. Available service templates can be found in the AWS Console, and developers can review the template and its requirements before deployment. Once they have selected the target template, they choose the repository that contains their service code, the GitHub app connection created by the platform administrator, and any parameters required by the service and CodePipeline.
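
From the CLI, this amounts to writing a small spec file and pointing Proton at the code repository. A sketch in which every name and value is hypothetical:

    # spec.yaml - per-instance parameters for the chosen service template
    proton: ServiceSpec
    instances:
      - name: etl-service-staging
        environment: staging
        spec:
          desired_count: 2

The service is then created by referencing the spec, the template, and the repository connection:

    aws proton create-service \
        --name etl-service \
        --template-name my-etl-service \
        --template-major-version "1" \
        --repository-connection-arn <connection-arn> \
        --repository-id my-org/etl-service \
        --branch-name main \
        --spec file://spec.yaml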

After some time, developers should be able to see their application stack in CloudFormation, their application's CodePipeline resources, and the rest of their application's resources!

In Closing

AWS Proton is a new and exciting service available for those looking to adopt Infrastructure as Code, enable CI/CD pipelines for their products, and enforce compliance, consistent standards, and best practices across their software and infrastructure. Here we explored a simple use case, but real-world scenarios likely require a more thorough examination and implementation.

AWS Proton may require a transition for teams that already utilize IaC, CI/CD, or that have created processes to centrally manage their platform infrastructure. 2nd Watch has over 10 years’ experience in helping companies move to the cloud and implement shared services platforms to simplify modern cloud operations. Start a conversation with a solution expert from 2nd Watch today and together we will assess and create a plan built for your goals and targets!

-Isaiah Grant, Cloud Consultant

How We Organize Terraform Code at 2nd Watch

When IT organizations adopt infrastructure as code (IaC), the benefits in productivity, quality, and ability to function at scale are manifold. However, the first few steps on the journey to full automation and immutable infrastructure bliss can be a major disruption to a more traditional IT operations team's established ways of working. One of the common problems faced in adopting infrastructure as code is how to structure the files within a repository in a consistent, intuitive, and scalable manner. Even IT operations teams whose members have development skills will still face this anxiety-inducing challenge simply because adopting IaC involves new tools whose conventions differ somewhat from more familiar languages and frameworks.

In this blog post, we'll go over how we structure our IaC repositories within 2nd Watch professional services and managed services engagements, with a particular focus on Terraform, an open-source tool by HashiCorp for provisioning infrastructure across multiple cloud providers with a single interface.

First Things First: README.md and .gitignore

The first task in any new repository is to create a README file. Many git repositories (especially on GitHub) have adopted Markdown as a de facto standard format for README files. A good README file will include the following information:

  1. Overview: A brief description of the infrastructure the repo builds. A high-level diagram is often an effective method of expressing this information. 2nd Watch uses LucidChart for general diagrams (exported to PNG or a similar format) and mscgen_js for sequence diagrams.
  2. Pre-requisites: Installation instructions (or links thereto) for any software that must be installed before building or changing the code.
  3. Building The Code: What commands to run in order to build the infrastructure and/or run the tests when applicable. 2nd Watch uses Make in order to provide a single tool with a consistent interface to build all codebases, regardless of language or toolset (see the sketch after this list). If using Make in Windows environments, Windows Subsystem for Linux is recommended for Windows 10 in order to avoid having to write two sets of commands in Makefiles: Bash and PowerShell.
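
As an illustration, a minimal Makefile for a Terraform root might look like the following sketch (the targets and the ENV variable are our own conventions, not anything Terraform prescribes; note that Make recipes must be indented with tabs):

    ENV ?= staging

    .PHONY: init plan apply

    init:
    	terraform init

    plan: init
    	terraform plan -var-file=env/$(ENV).tfvars

    apply: init
    	terraform apply -var-file=env/$(ENV).tfvars

Running make plan ENV=prod then previews changes against the production parameters with the same interface as any other codebase.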

It’s important that you do not neglect this basic documentation for two reasons (even if you think you’re the only one who will work on the codebase):

  1. The obvious: Writing this critical information down in an easily viewable place makes it easier for other members of your organization to onboard onto your project and will prevent the need for a panicked knowledge transfer when projects change hands.
  2. The not-so-obvious: The act of writing a description of the design clarifies your intent to yourself and will result in a cleaner design and a more coherent repository.

All repositories should also include a .gitignore file with the appropriate settings for Terraform. GitHub’s default Terraform .gitignore is a decent starting point, but in most cases you will not want to ignore .tfvars files because they often contain environment-specific parameters that allow for greater code reuse as we will see later.
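
A reasonable starting point, adapted from GitHub's default with the *.tfvars exclusion removed, might be:

    # Local Terraform directories and state
    .terraform/
    *.tfstate
    *.tfstate.*
    crash.log

    # Note: unlike GitHub's default, we deliberately do NOT ignore *.tfvars,
    # because env/*.tfvars files carry environment-specific parameters
    # that belong in version control.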

Terraform Roots and Multiple Environments

A Terraform root is the unit of work for a single terraform apply command. We group our infrastructure into multiple terraform roots in order to limit our “blast radius” (the amount of damage a single errant terraform apply can cause).

  • Repositories with multiple roots should contain a roots/ directory with a subdirectory for each root (e.g. one for the VPC, one per application), each with a main.tf file as the primary entry point.
  • Note that the roots/ directory is optional for repositories that only contain a single root, e.g. infrastructure for an application team which includes only a few resources which should be deployed in concert. In this case, modules/ may be placed in the same directory as main.tf.
  • Roots which are deployed into multiple environments should include an env/ subdirectory at the same level as main.tf. Each environment corresponds to a .tfvars file under env/ named after the environment, e.g. staging.tfvars. Each .tfvars file contains parameters appropriate for its environment, e.g. EC2 instance sizes.

Here’s what our roots directory might look like for a sample with a VPC and 2 application stacks, and 3 environments (QA, Staging, and Production):

    roots/
    ├── app1/
    │   ├── main.tf
    │   └── env/
    │       ├── prod.tfvars
    │       ├── qa.tfvars
    │       └── staging.tfvars
    ├── app2/
    │   ├── main.tf
    │   └── env/
    │       ├── prod.tfvars
    │       ├── qa.tfvars
    │       └── staging.tfvars
    └── vpc/
        ├── main.tf
        └── env/
            ├── prod.tfvars
            ├── qa.tfvars
            └── staging.tfvars

Terraform modules

Terraform modules are self-contained packages of Terraform configurations that are managed as a group. Modules are used to create reusable components, improve organization, and to treat pieces of infrastructure as a black box. In short, they are the Terraform equivalent of functions or reusable code libraries.

Terraform modules come in two flavors:

  1. Internal modules, whose source code is consumed by roots that live in the same repository as the module.
  2. External modules, whose source code is consumed by roots in multiple repositories. The source code for external modules lives in its own repository, separate from any consumers and separate from other modules to ensure we can version the module correctly.

In this post, we’ll only be covering internal modules.

  • Each internal module should be placed within a subdirectory under modules/.
  • Module subdirectories/repositories should follow the standard module structure per the Terraform docs.
  • External modules should always be pinned at a version: a git revision or a version number. This practice allows for reliable and repeatable builds. Failing to pin module versions may cause a module to be updated between builds, breaking the build without any obvious changes in our code. Even worse, failing to pin our module versions might cause a plan to be generated with changes we did not anticipate (see the sketch below).
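
As an illustration, here is how a root might consume an internal module by relative path and pin an external module to a git tag (the module names, paths, and repository URL are hypothetical):

    # Internal module: referenced by relative path within the same repository
    module "vpc" {
      source     = "../../modules/vpc"
      cidr_block = var.cidr_block
    }

    # External module: pinned to a git tag for reliable, repeatable builds
    module "alb" {
      source = "git::https://github.com/example-org/terraform-aws-alb.git?ref=v1.4.0"
      vpc_id = module.vpc.vpc_id
    }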

Here’s what our modules directory might look like:
    modules/
    ├── app/
    │   ├── README.md
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── vpc/
        ├── README.md
        ├── main.tf
        ├── outputs.tf
        └── variables.tf

Terraform and Other Tools

Terraform is often used alongside other automation tools within the same repository. Some frequent collaborators include Ansible for configuration management and Packer for compiling identical machine images across multiple virtualization platforms or cloud providers. When using Terraform in conjunction with other tools within the same repo, 2nd Watch creates a directory per tool from the root of the repo:

    .
    ├── ansible/
    ├── packer/
    └── terraform/

Putting it all together

The following illustrates a sample Terraform repository structure with all of the concepts outlined above:

    .
    ├── .gitignore
    ├── Makefile
    ├── README.md
    ├── ansible/
    ├── packer/
    └── terraform/
        ├── modules/
        │   ├── app/
        │   └── vpc/
        └── roots/
            ├── app1/
            │   ├── main.tf
            │   └── env/
            ├── app2/
            │   ├── main.tf
            │   └── env/
            └── vpc/
                ├── main.tf
                └── env/

Conclusion

There’s no single repository format that’s optimal, but we’ve found that this standard works for the majority of our use cases in our extensive use of Terraform on dozens of projects. That said, if you find a tweak that works better for your organization – go for it! The structure described in this post will give you a solid and battle-tested starting point to keep your Terraform code organized so your team can stay productive.

Additional resources

  • The Terraform Book by James Turnbull provides an excellent introduction to Terraform all the way through repository structure and collaboration techniques.
  • The HashiCorp AWS VPC Module is one of the most popular modules in the Terraform Registry and is an excellent example of a well-written Terraform module.
  • The source code for James Nugent's HashiDays NYC 2017 talk is an exemplary Terraform repository. Although it's based on an older version of Terraform (before providers were broken out from the main Terraform executable), the code structure, formatting, and use of Makefiles is still current.

For help getting started adopting Infrastructure as Code, contact us.

-Josh Kodroff, Associate Cloud Consultant