Rehost vs Refactor vs Replatform | AppMod Essentials

Migrating workloads or applications to the cloud can seem daunting for any organization. The cloud is synonymous with industry buzzwords such as DevOps, digital transformation, open source, and more. As of 2021, AWS has over 200 products and services.

Nowadays, every other LinkedIn post is somehow related to the cloud. Sound familiar? Maybe a bit intimidating? If so, you are not alone! Organizations often hope that operating in the cloud will help them become more agile, enhance business continuity, or reduce technical debt. All of which are achievable in a cloud environment with proper planning. 

Benjamin Franklin once said, “By failing to prepare, you are preparing to fail.” This sentiment is true not only in life but also in technology. Any successful IT project has a strategy and tangible business outcomes. Project managers must establish these before any “actual work” begins. Without this, leadership teams may not know if the project is on task and on schedule. Technical teams may struggle to determine where to start or what to prioritize. Here we’ll explore industry-standard strategies that organizations can deploy to begin their cloud journey and help technical leaders decide which path to take. 

What is Cloud Migration? 

Cloud migration is when an organization decides to move its data, applications, or other IT capabilities into a cloud service provider (CSP) such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Some organizations may decide to migrate all IT assets into the cloud; however, most organizations keep some services on-premises in a hybrid environment for various reasons. Performing a migration to the cloud may consist of multiple CSPs or even a private cloud. 

What Are the Different Strategies for Cloud Migration? 

Gartner recognizes five cloud migration strategies, nicknamed “The 5Rs.” Individually they are called rehost, refactor, revise (a.k.a. replatform), rebuild, and replace, each with benefits and drawbacks. This blog focuses on three of those five migration approaches—rehost, refactor, and replatform—as they play a significant role in application modernization. 

What is Rehost in the Cloud?

Rehost, or “lift and shift,” is the process of migrating a workload into the cloud as-is, without any modifications. Rehosting usually involves infrastructure-as-a-service (IaaS) technologies in a cloud provider, such as AWS EC2 or Azure VMs. Organizations with little cloud experience may consider this strategy because it is an easy start to their cloud journey. Cloud service providers are constantly creating new services for rehosting to make the process even easier. This strategy is less complex, so the timeline to complete a rehost migration can be significantly shorter than other strategies. Organizations often rehost workloads and then modernize after gaining more cloud knowledge and experience.

Rehosting Pros:

  • No architecture changes – Organizations can migrate workloads as-is, which benefits those with little cloud experience. 
  • Fastest migration method – Rehosting is often the quickest path to the cloud. This method is an excellent advantage for organizations that need to vacate an on-premises data center or colocation. 
  • Organizational changes are not necessary – Organizational processes and strategies to manage workloads can remain the same since architectures are not changing. Organizations will need to learn new tools for the selected cloud provider, but the day-to-day tasks will not change.  

  Rehosting Cons: 

  • High costs – Monthly spending will quickly add up in the cloud without modernizing applications. Organizations must budget appropriately for rehosting migrations. 
  • Lack of innovation – Rehosting does not take advantage of the variety of innovative and modern technologies available in the cloud.  
  • Does not improve the customer experience – Without change, applications cannot improve, which means customers will have a similar experience in the cloud. 

What Does Refactor Mean?

Refactoring is the technique of updating and optimizing applications for the cloud. It often involves “app modernization,” or updating the application’s existing code to take full advantage of cloud features and flexibility. This strategy can be complex because it requires source code changes and introduces modern technologies to the organization. These changes will need to be thoroughly tested and optimized, leading to possible delays. Therefore, organizations should take small steps by refactoring one or two modules at a time to correct issues and gaps at a smaller scale. Although refactoring may be the most time-consuming, it can provide the best return on investment (ROI) once complete.

Refactoring Pros: 

  • Cost reduction – Since applications are being optimized for the cloud, refactoring can provide the highest ROI and reduce the total cost of ownership (TCO). 
  • More flexible application architectures – Refactoring allows application owners the opportunity to explore the landscape of services available in the cloud and decide which ones fit best. 
  • Increased resiliency – Technologies and concepts like auto-scaling, immutable infrastructure, and automation can increase application resiliency and reliability. Organizations should consider all of these when refactoring.

Refactoring Cons:

  • A lot of change – Technology and cultural changes can be brutally painful. Cloud migrations often combine both, which compounds the pain. Add the complexity of refactoring, and you may have full-blown mutiny without careful planning and strong leadership. Refactoring migrations are not for the faint of heart, so tread lightly. 
  • Advanced cloud knowledge and experience are needed – Organizations lacking cloud experience may find it challenging to refactor applications by themselves. Organizations may consider using a consulting firm to address skillset gaps. 
  • Lengthy project timelines – Refactoring hundreds of applications doesn’t happen overnight. Organizations need to establish realistic timelines before starting a refactor migration. 

What is Replatform in the Cloud?

Replatforming is a happy medium between rehosting and refactoring. It applies a series of changes to the application so it fits the cloud better, without rearchitecting the whole application as you would in a full refactor. Replatforming projects often involve rearchitecting the database to a more cloud-native solution, adding scaling mechanisms, or containerizing applications.

Replatforming Pros:

  • Reduces cost – If organizations take cost-savings measures during replatforming, they will see a reduction in technical operating expenses. 
  • Acceptable compromise – Replatforming is considered a happy medium of adding features and technical capabilities without jeopardizing migration timelines. 
  • Adds cloud-native features – Replatforming can add cloud technologies like auto-scaling, managed storage services, infrastructure as code (IaC), and more (see the sketch after this list). These capabilities can reduce costs and improve customer experience.
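To make the auto-scaling and IaC capabilities above concrete, here is a minimal, hypothetical Terraform sketch of a web tier that scales between two and six instances. The AMI ID, subnet IDs, names, and sizes are placeholder assumptions, not a prescription for any particular workload:

resource "aws_launch_configuration" "web" {
  name_prefix   = "web-"
  image_id      = "ami-0123456789abcdef0"   # placeholder AMI
  instance_type = "t3.micro"                # placeholder size
}

resource "aws_autoscaling_group" "web" {
  name                 = "web-asg"
  launch_configuration = aws_launch_configuration.web.name
  min_size             = 2
  max_size             = 6
  vpc_zone_identifier  = ["subnet-aaaa1111", "subnet-bbbb2222"]   # placeholder subnets
}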

Replatforming Cons:

  • Scope creep may occur – Organizations may struggle to draw a line in the sand when replatforming. It can be challenging to decide which cloud technologies to prioritize. 
  • Limits the amount of change that can occur – Everything cannot be accomplished at once when replatforming. Technical leaders must decide what can be done given the migration timeline, then add the remaining items to a backlog.
  • Cloud and automation skills needed – Organizations lacking cloud experience may struggle replatforming workloads by themselves. 

Which cloud migration strategy is best for your organization? 

As stated above, it is essential to have clear business objectives for your organization’s cloud migration. Just as important is establishing a timeline for the migration. Both will help technical leaders and application owners decide which strategy is best. Below are some common goals organizations have for migrating to the cloud. 

Common business goals for cloud migrations:

  • Reduce technical debt 
  • Improve customers’ digital experience
  • Become more agile to respond to change faster 
  • Ensure business continuity 
  • Evacuate on-premises data centers and colocations 
  • Create a culture of automation 

Determining the best migration strategy is key to getting the most out of the cloud and meeting your business objectives. It is common for organizations to use all three of these strategies in tandem and often work with trusted advisors like 2nd Watch to determine and implement the best approach. When planning your cloud migration strategy, consider these questions:

Cloud Migration Strategy Considerations:

  • Is there a hard date for migrating the application? 
  • How long will it take to modernize? 
  • What are the costs for “lift and shift,” refactoring, and/or replatforming? 
  • When is the application being retired? 
  • Can the operational team(s) support modern architectures? 

Conclusion 

In today’s world, the cloud is where the most innovation in technology occurs. Companies that want to be a part of modern technology advancements should seriously consider migrating to the cloud. Organizations can achieve successful cloud migrations with the right strategy, clear business goals, and proper skillsets. 

2nd Watch is an AWS Premier Partner, Google Cloud Partner, and Microsoft Gold Partner, providing professional and managed cloud services to enterprises. Our subject matter experts and software-enabled services provide you with tested, proven, and trusted solutions in all aspects of cloud migration and application modernization.

Contact us to schedule a discussion on how we can help you achieve your 2022 cloud modernization objectives. 

By Jacob Acton, 2nd Watch Cloud Consultant 


Well-Architected Framework Reviews

“Whatever you do in life, surround yourself with smart people who argue with you.” – John Wooden

Many AWS customers and practitioners have leveraged the Well-Architected Framework methodology in building new applications or migrating existing applications. Once a build or migration is complete, how many companies implement Well-Architected Framework reviews and perform those reviews regularly? We have found that many companies today do not conduct regular Well-Architected Framework reviews and, as a result, potentially face a multitude of risks.

What is a Well-Architected Framework?

The Well-Architected Framework is a methodology designed to provide high-level guidance on best practices when using AWS products and services. Whether building new or migrating existing workloads, security, reliability, performance, cost optimization, and operational excellence are vital to the integrity of the workload and can even be critical to the success of the company. A review of your architecture is especially critical given the rapid rate at which cloud service providers (CSPs) create and release new products and services.

2nd Watch Well-Architected Framework Reviews

At 2nd Watch, we provide Well-Architected Framework reviews for our existing and prospective clients. The review process allows customers to make informed decisions about their architecture, the potential impact those decisions have on their business, and the tradeoffs they are making. 2nd Watch offers its clients free Well-Architected Framework reviews—conducted on a regular basis—for mission-critical workloads that could have a negative business impact upon failure.

Examples of issues we have uncovered and remediated through Well-Architected Reviews:

  • Security: Not protecting data in transit and at rest through encryption
  • Cost: Low utilization and inability to map cost to business units
  • Reliability: Single points of failure where recovery processes have not been tested
  • Performance: A lack of benchmarking or proactive selection of services and sizing
  • Operations: Not tracking changes to configuration management on your workload

Using a standards-based methodology, 2nd Watch will work closely with your team to thoroughly review the workload and produce a detailed report outlining actionable items and timeframes, along with prescriptive guidance in each of the key architectural pillars.

In reviewing your workload and architecture, 2nd Watch will identify areas of improvement, along with a detailed report of our findings. A separate paid engagement will be available to clients and prospects who want our AWS Certified Solutions Architects and AWS Certified DevOps Engineer Professionals to remediate our findings. To schedule your free Well-Architected Framework review, contact 2nd Watch today.

 

— Chris Resch, EVP Cloud Solutions, 2nd Watch


Migrating to Terraform v0.10.x

When it comes to managing cloud-based resources, it’s hard to find a better tool than HashiCorp’s Terraform. Terraform is an ‘infrastructure as code’ application, marrying configuration files with backing APIs to provide a nearly seamless layer over your various cloud environments. It allows you to declaratively define your environments and their resources through a process that is structured, controlled, and collaborative.

One key advantage Terraform provides over other tools (like AWS CloudFormation) is having a rapid development and release cycle fueled by the open source community. This has some major benefits: features and bug fixes are readily available, new products from resource providers are quickly incorporated, and you’re able to submit your own changes to fulfill your own requirements.

HashiCorp recently released v0.10.0 of Terraform, introducing some fundamental changes in the application’s architecture and functionality. We’ll review the three most notable of these changes and how to incorporate them into your existing Terraform projects when migrating to Terraform v0.10.x.

  1. Terraform Providers are no longer distributed as part of the main Terraform distribution
  2. New auto-approve flag for terraform apply
  3. Existing terraform env commands replaced by terraform workspace

A brief note on Terraform versions:

Even though Terraform uses a style of semantic versioning, its ‘minor’ versions should be treated as ‘major’ versions.

1. Terraform Providers are no longer distributed as part of the main Terraform distribution

The biggest change in this version is the removal of provider code from the core Terraform application.

Terraform Providers are responsible for understanding API interactions and exposing resources for a particular platform (AWS, Azure, etc.). They know how to initialize and call their applications or CLIs, handle authentication and errors, and convert HCL into the appropriate underlying API calls.

It was a logical move to split the providers out into their own distributions. The core Terraform application can now add features and release bug fixes at a faster pace, new providers can be added without affecting the existing core application, and new features can be incorporated and released to existing providers without as much effort. Having split providers also allows you to update your provider distribution and access new resources without necessarily needing to update Terraform itself. One downside of this change is that you have to keep up to date with features, issues, and releases of more projects.

The provider repos can be accessed via the Terraform Providers organization in GitHub. For example, the AWS provider can be found here.

Custom Providers

An extremely valuable side-effect of having separate Terraform Providers is the ability to create your own, custom providers. A custom provider allows you to specify new or modified attributes for existing resources in existing providers, add new or unsupported resources in existing providers, or generate your own resources for your own platform or application.

You can find more information on creating a custom provider from the Terraform Provider Plugin documentation.

1.1 Configuration

The nicest part of this change is that it doesn’t really require any additional modifications to your existing Terraform code if you were already using a Provider block.

If you don’t already have a provider block defined, you can find their configurations from the Terraform Providers documentation.

You simply need to call the terraform init command before you can perform any other action. If you fail to do so, you’ll receive an error informing you of the required actions (img 1a).
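In practice, that step is just a single command run from the root of your Terraform project:

# Download and install the providers referenced by this configuration
terraform init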

After successfully reinitializing your project, you will be provided with the list of providers that were installed as well as the versions requested (img 1b).

You’ll notice that Terraform suggests versions for the providers we are using – this is because we did not specify any specific versions of our providers in code. Since providers are now independently released entities, we have to tell Terraform what code it should download and use to run our project.

(Image 1a: Notice of required reinitialization)

(Image 1b: Response from successful reinitialization)

Providers are released separately from Terraform itself, and maintain their own version numbers.

You can specify the version(s) you want to target in your existing provider blocks by adding the version property (code block 1). These versions should follow the semantic versioning specification (similar to node’s package.json or python’s requirements.txt).

For production use, it is recommended to limit the acceptable provider versions to ensure that new versions with breaking changes are not automatically installed.

(Code Block 1: Provider Config)

provider "aws" {
  version = "0.1.4"
  allowed_account_ids = ["1234567890"]
  region = "us-west-2"
}
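If you prefer a range rather than an exact pin, the pessimistic constraint operator accepts patch releases while blocking breaking changes; the sketch below is illustrative, and the version number is an assumption rather than a recommendation:

provider "aws" {
  version = "~> 0.1.4"   # allow 0.1.x patch releases at or above 0.1.4, but not 0.2.0+
  region  = "us-west-2"
}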

(Image 1c: Currently defined provider configuration)

2. New auto-approve flag for terraform apply

In previous versions, running terraform apply would immediately apply any changes between your project and saved state.

Your normal workflow would likely be to run terraform plan, followed by terraform apply, and hope nothing changed in between.

This version introduced a new auto-approve flag which will control the behavior of terraform apply.

Deprecation Notice

This flag currently defaults to true to maintain backwards compatibility, but the default will change to false in the near future.

2.1 auto-approve=true (current default)

When set to true, terraform apply will work like it has in previous versions.

If you want to maintain this functionality, you should update your scripts, build systems, etc. now, as this default value will change in a future Terraform release.

(Code Block 2: Apply with default behavior)

# Apply changes immediately without plan file
terraform apply --auto-approve=true

2.2 auto-approve=false

When set to false, Terraform will present the user with the execution plan and pause for interactive confirmation (img 2a).

If the user provides any response other than yes, terraform will exit without applying any changes.

If the user confirms the execution plan with a yes response, Terraform will then apply the planned changes (and only those changes).

If you are trying to automate your Terraform scripts, you might want to consider producing a plan file for review, then providing explicit approval to apply the changes from the plan file.

(Code Block 3: Apply plan with explicit approval)

# Create Plan
terraform plan -out=tfplan

# Apply approved plan
terraform apply tfplan --auto-approve=true

(Image 2a: Terraform apply with execution plan)

3. Existing terraform env commands replaced by terraform workspace

The terraform env family of commands was replaced with terraform workspace to help alleviate some confusion in functionality. Workspaces are very useful and can do much more than just split up environment state (which isn’t necessarily what they should be used for). I recommend checking them out and seeing if they can improve your projects.

There is not much to do here other than switch the command invocation. The previous commands still work for now, but they are deprecated.
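As a quick reference, the new subcommands map directly onto the old ones (the workspace name below is just illustrative):

# List workspaces, then create and switch to a new one
terraform workspace list
terraform workspace new staging      # replaces: terraform env new staging
terraform workspace select staging   # replaces: terraform env select staging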

 


 

— Steve Byerly, Principal SDE (IV), Cloud, 2nd Watch


Migrating Terraform Remote State to a “Backend” in Terraform v.0.9+

(AKA: Where the heck did ‘terraform remote config’ go?!!!)

If you are working with cloud-based architectures or working in a DevOps shop, you’ve no doubt been managing your infrastructure as code. It’s also likely that you are familiar with tools like Amazon CloudFormation and Terraform for defining and building your cloud architecture and infrastructure. For a good comparison of Amazon CloudFormation and Terraform, check out Coin Graham’s blog on the matter: AWS CFT vs. Terraform: Advantages and Disadvantages.

If you are already familiar with Terraform, then you may have encountered a recent change to the way remote state is handled, starting with Terraform v0.9. Continue reading to find out more about migrating Terraform Remote State to a “Backend” in Terraform v.0.9+.

First off… if you are unfamiliar with what remote state is, check out this page.

Remote state is a big ol’ blob of JSON that stores the configuration details and state of your Terraform configuration and the infrastructure that has actually been deployed. This is pretty dang important if you ever plan on changing your environment (which is “likely”, to put it lightly), especially important if you want more than one person managing/editing/maintaining the infrastructure, and important if you have even the most basic backup and recovery requirements.

Terraform supports almost a dozen backend types (as of this writing) including:

  • Artifactory
  • Azure
  • Consul
  • etcd
  • GCS
  • HTTP
  • Manta
  • S3
  • Swift
  • Terraform Enterprise (AKA: Atlas)

 

Why not just keep the Terraform state in the same git repo I keep the Terraform code in?

You don’t want to store the state file in a code repository because it may contain sensitive information like DB passwords, and because the state is prone to frequent changes and it is easy to forget to push those changes to your git repo any time you run Terraform.

So, what happened to terraform remote anyway?

If you’re like me, you probably run the latest version of HashiCorp’s Terraform tool as soon as it is available (we actually have a hook in our team Slack channel that notifies us when a new version is released). With the release of Terraform v.0.9 last month, we were endowed with the usual heaping helping of excellent new features and bug-fixes we’ve come to expect from the folks at HashiCorp, but we were also met with an unexpected change in the way remote state is handled.

Unless you are religious about reading the release notes, you may have missed an important change in v.0.9 around remote state. The release notes don’t specifically call out the removal (not even deprecation, but FULL removal) of the prior method (i.e., terraform remote config); however, the Upgrade Guide does call out the process for migrating from the legacy method to the new method of managing remote state. More specifically, it provides a link to a guide for migrating from the legacy remote state config to the new backend system. The steps are pretty straightforward, and the new approach is much improved over the prior method for managing remote state. So, while the change is good, a deprecation warning in v.0.8 would have been much appreciated. At least it is still backwards compatible with the legacy remote state files (up to version 0.10), making the migration process much less painful.

Prior to v.0.9, you may have been managing your Terraform remote state in an S3 bucket utilizing the Terraform remote config command. You could provide arguments like: backend and backend-config to configure things like the S3 region, bucket, and key where you wanted to store your remote state. Most often, this looked like a shell script in the root directory of your Terraform directory that you ran whenever you wanted to initialize or configure your backend for that project.

Something like…

Terraform Legacy Remote S3 Backend Configuration Example

#!/bin/sh
export AWS_PROFILE=myprofile
terraform remote config \
--backend=s3 \
--backend-config="bucket=my-tfstates" \
--backend-config="key=projectX.tfstate" \
--backend-config="region=us-west-2"

This was a bit clunky but functional. Regardless, it was rather annoying having some configuration elements outside of the normal terraform config (*.tf) files.

Along came Terraform v.0.9

The introduction of Terraform v.0.9 with its newfangled “Backends” makes things much more seamless and transparent. Now we can replicate that same remote state backend configuration with a backend configuration block in a Terraform configuration like so:

Terraform S3 Backend Configuration Example

terraform {
  backend "s3" {
    bucket = "my-tfstates"
    key    = "projectX.tfstate"
    region = "us-west-2"
  }
}

A Migration Example

So, using our examples above, let’s walk through the process of migrating from a legacy “remote config” to a “backend”. Detailed instructions for the following can be found here.

1. (Prior to upgrading to Terraform v.0.9+) Pull remote config with pre v.0.9 Terraform

> terraform remote pull
Local and remote state in sync

2. Backup your terraform.tfstate file

> cp .terraform/terraform.tfstate /path/to/backup

3. Upgrade Terraform to v.0.9+

4. Configure the new backend

terraform {
  backend "s3" {
    bucket = "my-tfstates"
    key    = "projectX.tfstate"
    region = "us-west-2"
  }
}

5. Run Terraform init

> terraform init
Downloading modules (if any)...
 
Initializing the backend...
New backend configuration detected with legacy remote state!
 
Terraform has detected that you're attempting to configure a new backend.
At the same time, legacy remote state configuration was found. Terraform will
first configure the new backend, and then ask if you'd like to migrate
your remote state to the new backend.
 
 
Do you want to copy the legacy remote state from "s3"?
  Terraform can copy the existing state in your legacy remote state
  backend to your newly configured backend. Please answer "yes" or "no".
 
  Enter a value: no
  
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
 
Terraform has been successfully initialized!
 
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
 
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

6. Verify the new state is copacetic

> terraform plan
 
...
 
No changes. Infrastructure is up-to-date.
 
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.

7. Commit and push
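That last step is ordinary git hygiene; a minimal sketch (the file name and commit message are illustrative, and the state file itself stays out of the repo) looks like this:

> git add main.tf
> git commit -m "Migrate remote state to S3 backend"
> git push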

In closing…

Managing your infrastructure as code isn’t rocket science, but it also isn’t trivial. Having a solid understanding of cloud architectures, the Well-Architected Framework, and DevOps best practices can greatly impact the success you have. A lot goes into architecting and engineering solutions in a way that maximizes your business value, application reliability, agility, velocity, and key differentiators. This can be a daunting task, but it doesn’t have to be! 2nd Watch has the people, processes, and tools to make managing your cloud infrastructure as code a breeze! Contact us today to find out how.

 

— Ryan Kennedy, Principal Cloud Automation Architect, 2nd Watch

 


Ransomware Attack Leaves Some Companies WannaCrying Over Technical Debt

The outbreak of a virulent strain of ransomware, also known as WannaCry or WannaCrypt, is finally winding down. A form of malware, the WannaCry attack exploited certain vulnerabilities in Microsoft Windows and infected hundreds of thousands of Windows computers worldwide. As the dust begins to settle, the conversation inevitably turns to what could have been done to prevent it.

The first observation is that most organizations could have been protected simply by following best practices—most notably, the regular installation of known security and critical patches that help to minimize vulnerabilities. WannaCry was not an exotic “zero day” incident. The patch for the underlying vulnerabilities (MS17-010) has been available since March. Companies like 2nd Watch maintain a regular patch schedule to protect their systems from these and similar attacks. It should be noted that due to the prolific nature of this malware and the active attack vectors, 2nd Watch is requiring that all Windows systems be patched by 5/31/2017.

Other best practices include:

  • Maintaining support contracts for out-of-date operating systems
  • Enabling firewalls, in addition to intrusion detection and prevention systems
  • Proactively monitoring and validating traffic going in and out of the network
  • Implementing security mechanisms for other points of entry attackers can use, such as email and websites
  • Deploying application control to prevent suspicious files from executing in addition to behavior monitoring that can thwart unwanted modifications to the system
  • Employing data categorization and network segmentation to mitigate further exposure and damage to data
  • Backing up important data. This is the single, most effective way of combating ransomware infection. However, organizations should ensure that backups are appropriately protected or stored off-line so that attackers can’t delete them.

 

The importance of regularly scheduled patching and keeping systems up-to-date cannot be overemphasized. It may not be sexy, but it is highly effective.

All of these recommendations seem simple enough, but why did the outbreak spread so quickly if the vulnerabilities were known and patches were readily available? It spread because the patches were released for currently supported systems, but the vulnerability has been present in all versions of Windows dating back to Windows XP. For these older systems – no longer supported by Microsoft but still widely used – the patches weren’t there in the first place. One of the highest profile victims, Britain’s National Health Service, discovered that 90 percent of NHS trusts run at least one Windows XP device, an operating system Microsoft first introduced in 2001 and hasn’t supported since 2014. In fact, it was only because of the high-profile nature of this malware that Microsoft took the rare step this week of publishing a patch for Windows XP, Windows Server 2003 and Windows 8.

This brings us to the challenging topic of “technical debt”—the extra cost and effort to continue using older technology. The WannaCry/WannaCrypt outbreak is simply the most recent teachable moment about those costs.

A big benefit of moving to cloud computing is its ability to help rid one’s organization of technical debt. By migrating workloads into the cloud, and even better, by evolving those workloads into modern, cloud-native architectures, the issue of supporting older servers and operating systems is minimized. As Gartner pointed out in the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide, through 2018, the cloud managed service market will remain relatively immature, and more than 75% of fully successful implementations will be delivered by highly skilled, forward-looking, boutique managed service providers with a cloud-native, DevOps-centric service delivery approach, like 2nd Watch.  A free download of the report can be found here.

Partners like 2nd Watch can also help reduce your overall management cost by tailoring solutions to manage your infrastructure in the cloud. The best practices mentioned above – regular patching, resource isolation, traffic monitoring, etc. – can be automated in many environments and are all done for you so you can focus on your business.

Even more important, companies like 2nd Watch help ensure the ongoing optimization of your workloads, both from a cost and a performance point of view. The life-cycle of optimization and modernization of your cloud environments is perhaps the single greatest mechanism to ensure that you never take on and retain high levels of technical debt.

 

-John Lawler, Sr Product Manager


2nd Watch Meets Customer Demands and Prepares for Continued Growth and Acceleration with Amazon Aurora

The Product Development team at 2nd Watch is responsible for many technology environments that support our software and solutions—and ultimately, our customers. These environments need to be easily built, maintained, and kept in sync. In 2016, 2nd Watch performed an analysis of the amount of AWS billing data that we had collected and the number of payer accounts we had processed over the course of the previous year. Our analysis showed that these measurements had more than tripled from 2015, and projections showed that we will continue to grow at the same, rapid pace, with AWS usage and client onboarding increasing daily. Knowing that the storage of data is critical for many systems, our Product Development team conducted an evaluation of the database architecture used to house our company’s billing data—a single instance running the Web edition of SQL Server with the maximum number of EBS volumes attached.

During the evaluation, areas such as performance, scaling, availability, maintenance, and cost were considered and deemed most important for future success. The evaluation revealed that our current billing database architecture could not meet the criteria laid out to keep pace with growth. Considerations were made to increase capacity by upgrading the VM to the maximum size in its instance family or to potentially upgrade to MS SQL Enterprise. In either scenario, the cost of the MS SQL instance doubled. The only option for scaling without substantially increasing our cost was to scale vertically; however, doing so would result in diminishing performance gains. Maintenance of the database had become a full-time job that was increasingly difficult to manage.

Ultimately, we chose the cloud-native solution, Amazon Aurora, for its scalability and its low-risk, easy-to-use technology. Amazon Aurora is a MySQL-compatible relational database that provides speed and reliability while being delivered at a lower cost. It offers greater than 99.99% availability and can store up to 64TB of data. Aurora is self-healing and fully managed, which, along with the other key features, made Amazon Aurora an easy choice as we continue to meet the AWS billing usage demands of our customers and prepare for future growth.

The conversion from MS SQL to Amazon Aurora was successfully completed in early 2017 and, with the benefits and features that Amazon Aurora offers, many gains were made in multiple areas. Product Development can now reduce the complexity of database schemas because of the way Aurora stores data. For example, a database with one hundred tables and hundreds of stored procedures was reduced to one table with 10 stored procedures. Gains were made in performance as well. The billing system produces thousands of queries per minute, and Amazon Aurora handles the load with the ability to scale to accommodate the increasing number of queries. Maintenance of the Amazon Aurora system is now almost entirely managed for us. Tasks such as database backups are automated without the complicated task of managing disks. Additionally, data is copied across six replicas in three availability zones, which ensures availability and durability.

With Amazon Aurora, every environment is now easily built and set up using Terraform. All infrastructure is automatically provisioned—from the web tier to the database tier—with Amazon CloudWatch logs to alert the company when issues occur. Data can easily be imported using automated processes and even anonymized if there is sensitive data or the environment is used to demo to our customers. With the conversion of our database architecture from a single MS SQL Server instance to Amazon Aurora, our Product Development team can now focus on accelerating development instead of maintaining its data storage system.
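As an illustration of how little Terraform it takes to stand such an environment up, the sketch below shows an Aurora cluster with two instances. It is a minimal example using placeholder names, credentials, and sizes, not our production configuration:

resource "aws_rds_cluster" "billing" {
  cluster_identifier = "billing-aurora"
  engine             = "aurora"                     # MySQL-compatible Aurora
  master_username    = "admin"                      # placeholder credentials
  master_password    = "change-me"                  # supply from a secret store in practice
  availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
}

resource "aws_rds_cluster_instance" "billing" {
  count              = 2                            # one writer plus one replica
  identifier         = "billing-aurora-${count.index}"
  cluster_identifier = "${aws_rds_cluster.billing.id}"
  instance_class     = "db.r4.large"                # placeholder size
}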