Mind the Gap! The Leap from Legacy to Modern Applications 


Most businesses today have evaluated their options for application modernization. Planned movement to the cloud happened ahead of schedule, driven by the need for rapid scalability and agility in the wake of COVID-19.

Legacy applications already rehosted or replatformed in the cloud saw increased load, highlighting painful inefficiencies in scalability and sometimes even causing outages. Your business has likely already taken some first steps in app modernization and updating legacy systems. 

Of the seven options for modernizing legacy systems outlined by Gartner, 2nd Watch commonly works with clients who have already successfully rehosted and replatformed applications. To a lesser extent, we see mainframe applications encapsulated in a modern RESTful API or replaced altogether. Businesses frequently take those first steps in their digital transformation but find themselves stuck crossing the gap to a fully modern application. 

What are common issues and solutions businesses face as they move away from outdated technologies and progress towards fully modern applications? 

Keeping the Goal in Mind 

Overcoming the inertia to begin a modernization project is often a lengthy process, requiring several months or as much as a year or more to complete the first phases. Development teams require training, thorough and careful planning must occur, and unforeseen challenges are encountered and overcome. Through it all, the needs of the business never slow down, and the temptation to halt or dramatically slow legacy modernization efforts after the initial phases of modernization can be substantial. 

No matter what the end state of the modernization journey looks like, it can be helpful to keep it at the forefront of the development team’s minds. In today’s remote and hybrid working environment, that’s not as easy as keeping a whiteboard or poster in a room. Sprint ceremonies should include a brief reminder of long-term business goals, especially for backlog or sprint reviews. Keep the team invested in the business and technical reasons for the effort, and keep the question “why modernize legacy applications?” front and center. Most importantly, solicit their feedback on the process required to accomplish the long-term strategic goals of the business. 

With the goal firmly in your development team’s minds, it’s time to tackle tactics in migrating from legacy apps to newer systems. What are some of these common stumbling blocks on the road to refactoring and rearchitecting legacy software? 

(Related article: Rehost vs Refactor vs Replatform | AppMod Essentials) 

Refactoring 

Refactoring an application can encompass a broad set of areas. Refactoring is sometimes as straightforward as reducing technical debt, or it can be as complex as breaking apart a monolithic application into smaller services. In 2nd Watch’s experience, some common issues when refactoring running applications include: 

  • Limited knowledge of cloud-based architectural patterns.
    Even common architectures like 2- and 3-tier applications require some legacy code changes when an application moves from a data center to a cloud service provider or between cloud service providers. Where an older application may have hardcoded IP addresses or DNS names, a modern approach to accessing application tiers uses environment variables configured at runtime that point at load balancers (see the sketch after this list). 
  • Lack of telemetry and observability.
    Development teams are frequently hesitant to make changes quickly because there are too many unknowns in their application. Proper monitoring of known unknowns (metrics) and unknown unknowns (observability) can demystify the impact of refactoring. For more context around the types of unknowns and how to work with them in an application, Charity Majors frequently writes on the topic. 
  • Lack of thorough automated tests.
    A lack of automated tests also slows the ability to make changes because developers cannot anticipate what their changes might break. Improved telemetry and observability can help, but automated testing is the other side of the equation. Tools like Codecov can initially help improve test coverage, but unless carefully attended, incentivizing a percentage of test coverage across the codebase can lead to tests that do not thoroughly cover all common use cases. Good unit tests and integration tests can catch problems before they reach production. 
  • No blueprint for optimal refactoring.
    Without a clear blueprint for understanding what an optimally refactored app looks like, development and information technology teams can become frustrated or unclear about their end goals. Heroku’s Twelve-Factor App methodology is one commonly used framework for crafting or refactoring modern applications. It has the added benefit of being applicable to many deployment models – single- or multiple-server, containers, or serverless. 
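
To make the first point above concrete, here is a minimal Terraform sketch of the environment-variable approach. It assumes an AWS application load balancer fronting an API tier; the resource names, variables, and the /etc/environment convention are hypothetical, and a real application would read the variable through whatever configuration mechanism it already uses.

variable "private_subnet_ids" {
  type = list(string)
}

variable "ami_id" {
  type = string
}

# Internal load balancer fronting the API tier.
resource "aws_lb" "api" {
  name               = "api-internal"
  internal           = true
  load_balancer_type = "application"
  subnets            = var.private_subnet_ids
}

# Web tier instance: the load balancer's DNS name is injected at boot as an
# environment variable, so nothing is hardcoded into the application.
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = "t2.micro"

  user_data = <<-EOF
    #!/bin/bash
    echo "API_ENDPOINT=${aws_lb.api.dns_name}" >> /etc/environment
  EOF
}

Because the endpoint is resolved at deploy time, the same code and image can move between environments or providers by changing only the values supplied to Terraform.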

Rearchitecting

Rearchitecting an application to leverage better capabilities, such as those found in a cloud service provider’s Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS) options, may present some challenges. The most common challenge 2nd Watch encounters with clients is not fully understanding the options available in modern environments. Older applications are the product of their time and typically were built optimally for the available technology and needs. However, when rearchitecting those applications, sometimes development teams either don’t know or don’t have details about better options that may be available. 

Running a MySQL database on the same machine as the rest of the monolithic application may have made sense when initially writing the application. Today, many applications can run more cheaply, more securely, and with the same or better performance using a combination of cloud storage buckets, managed caches like Redis or Memcached, and secrets managers. These consumption-based cloud options tend to be significantly cheaper than managed databases or databases running on cloud virtual machines. Scaling automatically with end-user demand and reduced management overhead are additional benefits of software modernization. 
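
As an illustration only (the names are hypothetical, and the right mix of services depends entirely on the application), a Terraform sketch of those managed building blocks might look like this:

# Object storage for static assets that previously lived on the database host.
resource "aws_s3_bucket" "app_assets" {
  bucket = "example-app-assets"
}

# Managed Redis cache for sessions and hot data.
resource "aws_elasticache_cluster" "sessions" {
  cluster_id      = "app-sessions"
  engine          = "redis"
  node_type       = "cache.t3.micro"
  num_cache_nodes = 1
}

# Secrets Manager entry replacing credentials stored in application config files.
resource "aws_secretsmanager_secret" "db_credentials" {
  name = "app/db-credentials"
}

Each of these pieces is billed largely on consumption and managed by the provider, which is where the cost and operational savings described above come from.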

Rearchitecting an application can also be frustrating for experienced systems administrators tasked with maintaining and troubleshooting production applications. For example, moving from VMs to containers introduces an entirely different way of dealing with logs: sysadmins must forward them to a log aggregator instead of storing them on disk. Autoscaling a service can mean identifying which of potentially dozens or hundreds of instances had an issue, rather than checking a small handful. Application modernization impacts every person involved with the long-term success of that application, not just developers and end-users. 
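
As a sketch of what that logging change can look like in practice, assume the containerized service runs on Amazon ECS and ships its logs to CloudWatch Logs through the awslogs driver instead of writing files to the host (the names, image, and region below are hypothetical):

resource "aws_cloudwatch_log_group" "web" {
  name              = "/app/web"
  retention_in_days = 30
}

resource "aws_ecs_task_definition" "web" {
  family = "web"

  container_definitions = jsonencode([
    {
      name      = "web"
      image     = "example/web:latest"
      essential = true
      memory    = 256

      # Logs go to the aggregator, not to files on the container host.
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.web.name
          "awslogs-region"        = "us-west-2"
          "awslogs-stream-prefix" = "web"
        }
      }
    }
  ])
}

Sysadmins then search one log group rather than logging into individual hosts, which is exactly the workflow change described above.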

Conclusion 

Application modernization is a long-term strategic activity, not a short-term tactical one. Over time, you will realize the benefits of lower total cost of ownership (TCO), increased agility, and faster time to market. Recognizing and committing to the future of your business will help you overcome the short- and mid-term challenges of app modernization. 

Engaging a trusted partner to accelerate your app modernization journey and lead the charge across that gap is a powerful strategy to overcome some of the highlighted problems. It can be difficult to overcome a challenge with the same mindset that led to creating that challenge. An influx of different ideas and experiences can be the push development teams need to reach the next level for a business. 

If you’re wondering how to modernize legacy applications and are ready to work with a trusted advisor that can help you cross that gap, 2nd Watch will meet you wherever you are in your journey. Contact us to schedule a discussion of your goals, challenges, and how we can help you reach the end game of modern business applications. 

Michael Gray, 2nd Watch Senior Cloud Consultant 


3 Advantages to Embracing the DevOps Movement (Plus Bonus Pipeline Info!)

What is DevOps?

As cloud adoption increases across all industries, understanding the practices and tools that help software run efficiently is essential to how an organization and its cloud environment operate. However, many companies do not have the knowledge or expertise needed for success. In fact, Puppet’s 2021 State of DevOps Report found that while 2 in 3 respondents report using the public cloud, only 1 in 4 use the cloud to its full potential.

Enter the DevOps movement

The concept of DevOps combines development and operations to encourage collaboration, embrace automation, and speed up the deployment process. Historically, development and operations teams worked independently, leading to inefficiencies and inconsistencies in objectives and department leadership. DevOps is the movement to eliminate these roadblocks and bring the two communities together to transform how their software operates.

According to a 2020 Atlassian survey, 99% of developers and IT decision-makers say DevOps has positively impacted their organization, with benefits that include career advancement and better, faster deliverables. Given that favorable outcome, adopting DevOps tools and practices is a no-brainer. But here are three more advantages to embracing the DevOps movement:

1. Speed

Practices like microservices and continuous delivery allow your business operations to move faster, as your operations and development teams can innovate for customers more quickly, adapt to changing markets, and efficiently drive business results. Additionally, continuous integration and continuous delivery (CI/CD) automate the software release process for fast and continuous software delivery. A quick release process will allow you to release new features, fix bugs, respond to your customers’ needs, and ultimately, provide your organization with a competitive advantage.

2. Security

While DevOps focuses on speed and agile software development, security is still of high priority in a DevOps environment. Tools such as automated compliance policies, fine-grained controls, and configuration management techniques will help you reap the speed and efficiencies provided by DevOps while maintaining control and compliance of your environment.

3. Improved Collaboration

DevOps is more than just technical practices and tools. A complete DevOps transformation involves adopting cultural values and organizational practices that increase collaboration and improve company culture. The DevOps cultural model emphasizes values like ownership and accountability. As development and operations teams work closely together, their collaboration reduces inefficiencies in their workflows. Collaboration also entails succinctly communicating roles, plans, and goals. The State of DevOps Report also found that clarity of purpose, mission, and operating context seems to be strongly associated with highly evolved organizations.

In short, teams who adopt DevOps practices can improve and streamline their deployment pipeline.

What is a DevOps Pipeline?

The term “DevOps pipeline” describes the set of automated processes and tools that allow development and operations teams to implement, test, and deploy code to a production environment in a structured and organized manner.

A DevOps pipeline may vary from company to company, but it typically has eight phases: plan, code, build, test, release, deploy, operate, and monitor. When developing a new application, a DevOps pipeline ensures that the code runs smoothly. Once the code is written, various tests are run against it to flush out potential bugs and other errors. After the code is built and the tests pass, it is ready for deployment to external users.

A significant characteristic of a DevOps pipeline is that it is continuous, meaning each function occurs on an ongoing basis. The most vital of these practices, mentioned earlier, is CI/CD. CI, or continuous integration, is the practice of automatically and continuously building and testing any changes submitted to an application. CD, or continuous delivery, extends CI by using automation to release software frequently and predictably with the click of a button. CD allows developers to perform a more comprehensive assessment of updates to confirm there are no issues.

Other “continuous” DevOps practices include:

  • Continuous deployment: This practice goes beyond continuous delivery (CD). It is an entirely automated process that requires no human intervention, eliminating the need for a “release day.”
  • Continuous feedback: Applying input from customers and stakeholders, and systematic testing and monitoring code in the pipeline, allows developers to implement changes faster, leading to greater customer satisfaction.
  • Continuous testing: A fundamental enabler of continuous feedback. Performing automated tests on the code throughout the pipeline leads to faster releases and a higher quality product.
  • Continuous monitoring: Another component of continuous feedback. Use this practice to continuously assess the health and performance of your applications and identify any issues.
  • Continuous operations: Use this practice to minimize or eliminate downtime for your end users through efficiently managing hardware and software changes.

 Embrace the DevOps Culture

We understand that change is not always easy. However, through our Application Modernization & DevOps Transformation process, 2nd Watch can help you embrace and achieve a DevOps culture.

From a comprehensive assessment that measures your current software development and operational maturity to developing a strategy for where and how to apply different DevOps approaches to ongoing management and support, we will be with you every step of the way. Following is what a typical DevOps transformation engagement with us looks like:

Phase 0: Basic DevOps Review

  • DevOps and assessment overview delivered by our Solutions Architects

Phase 1: Assessment & Strategy

  • Initial 2-4 week engagement to measure your current software development and operational maturity
  • Develop a strategy for where and how to apply DevOps approaches

Phase 2: Implementation

Phase 3: Onboarding to Managed Services

  • 1-2 week onboarding to 2nd Watch Managed DevOps service and integration of your operations team and tools with ours

Phase 4: Managed DevOps

  • Ongoing managed service, including monitoring, security, backups, and patching
  • Ongoing guidance and coaching to help you continuously improve and increase the use of tooling within your DevOps teams

Getting Started with DevOps

Even when companies understand the business benefits derived from DevOps, putting them into practice can be challenging; 2nd Watch has the knowledge and expertise to help accelerate your digital transformation journey. 2nd Watch is a Docker Authorized Consulting Partner and has earned the AWS DevOps Competency for technical proficiency, leadership, and proven success in helping customers adopt the latest DevOps principles and technologies. Contact us today to get started.

-Tessa Foley, Marketing

 


Standardizing & Automating Infrastructure Development Processes

Introduction

Let’s start with a small look at the current landscape of technology and how we arrived here. There aren’t very many areas of tech that have not been, or are not currently, in a state of flux. Everything from software delivery vehicles and development practices to infrastructure creation has experienced some degree of transformation over the past several years. From VMs to containers, it seems like almost every day the technology tool belt grows a little bigger, and our world gets a little better (though perhaps more complex) thanks to these advancements. For me, this was incredibly apparent when I began to delve into configuration management, which later evolved into what we now call “infrastructure as code”.

The transformation of the development process began with the simple systems we once used to manage a few machines (like bash scripts or Makefiles), which then morphed into more complex systems (CFEngine, Puppet, and Chef) to manage thousands of machines. As configuration management software matured, engineers and developers began leaning on it to do more things. With the advent of hypervisors and the rise of virtual machines, it was only a short time before hardware requests became API requests, and infrastructure as a service (IaaS) was born. With all the new capabilities and options in this brave new world, we once again started to lean on our configuration management systems, this time for provisioning and not just convergence.

Provisioning & Convergence

I mentioned two terms that I want to clarify: provisioning and convergence. Say you were a car manufacturer and you wanted to make a car. Provisioning would be the step in which you request the raw materials to make the parts for your automobile; this is where we would use tools like Terraform, CloudFormation, or Heat. Convergence, on the other hand, would be the assembly line by which we check each part and assemble the final product (utilizing config management software).

By and large, the former tends to be declarative with little in the way of conditionals or logic, while the latter is designed to be robust and malleable software that supports all the systems we run and plan on running. This is the frame for the remainder of what we are going to talk about.

By separating the concerns of our systems, we can create a clear delineation of purpose for each tool, so we don’t feel like we are trying to jam everything into an interface that doesn’t have the best support for our platform or, more importantly, our users. The remainder of this post will focus on the provisioning aspect of configuration management.

Standards and Standardization

These are two different things in my mind. Standardization is extremely prescriptive and can often seem particularly oppressive to professional knowledge workers, such as engineers or developers. It can be seen as taking the innovation away from the job. Whereas standards provide boundaries, frame the problem, and allow for innovative ways of approaching solutions. I am not saying standardization in some areas is entirely bad, but we should let the people who do the work have the opportunity to grow and innovate in their own way with guidance. The topic of standards and standardization is part of a larger conversation about culture and change. We intend to follow up with a series of blog articles relating to organizational change in the era of the public cloud in the coming weeks.

So, let’s say that we make a standard for our new EC2 instances running Ubuntu. We’ll say that all instances must be running the latest official Canonical Ubuntu 14.04 AMI and must have these three tags: Owner, Environment, and Application. How can we enforce that in the development of our infrastructure? On AWS, we can create AWS Config Rules, but that is reactive and requires ad-hoc remediation. What we really want is a more prescriptive approach that brings our standards closer to the development pipeline. One of the ways I like to solve this issue is by creating an abstraction. Say we have a Terraform template that looks like this:

# Create a new instance of the latest Ubuntu 14.04 on an AWS EC2 instance

provider "aws" {
  region = "us-west-2"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  tags {
    Owner       = "DevOps Ninja"
    Environment = "Dev"
    Application = "Web01"
  }
}

This would meet the standard that we have set forth, but we are relying on the developer or engineer to adhere to that standard. What if we enforce this standard by codifying it in an abstraction? Let’s take that existing template and turn it into a Terraform module instead.

Module

# Create a new instance of the latest Ubuntu 14.04 on an AWS EC2 instance

variable "aws_region" {}
variable "ec2_owner" {}
variable "ec2_env" {}
variable "ec2_app" {}
variable "ec2_instance_type" {}

provider "aws" {
  region = "${var.aws_region}"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "${var.ec2_instance_type}"

  tags {
    Owner       = "${var.ec2_owner}"
    Environment = "${var.ec2_env}"
    Application = "${var.ec2_app}"
  }
}

Now we can have our developers and engineers leverage our tf_ubuntu_ec2_instance module.

New Terraform Plan

module "Web01" { source =
"git::ssh://git@github.com/SomeOrg/tf_u buntu_ec2_instance"

aws_region = "us-west-2" ec2_owner = "DevOps Ninja" ec2_env	= "Dev"
ec2_app	= "Web01"
}

This doesn’t enforce the usage of the module, but it does create an abstraction that provides an easy way to maintain standards without a ton of overhead. It also provides a pattern for creating further modules that enforce these particular standards.

This leads us into another method of implementing standards, one that is more prescriptive and falls into the category of standardization (eek!). One of the most underutilized services in the AWS product stable has to be Service Catalog.

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.
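
As a rough sketch of how this can tie back into infrastructure as code (the names and template URL are hypothetical, and argument names can vary across AWS provider versions), a portfolio and an approved product can themselves be managed with Terraform:

# Portfolio that groups the approved, standards-compliant offerings.
resource "aws_servicecatalog_portfolio" "standard_compute" {
  name          = "Standard Compute"
  description   = "Approved EC2 configurations that meet our AMI and tagging standards"
  provider_name = "Cloud Platform Team"
}

# Product backed by a CloudFormation template that encodes the standard.
resource "aws_servicecatalog_product" "ubuntu_web" {
  name  = "Ubuntu 14.04 Web Server"
  owner = "Cloud Platform Team"
  type  = "CLOUD_FORMATION_TEMPLATE"

  provisioning_artifact_parameters {
    name         = "v1.0"
    template_url = "https://s3.amazonaws.com/example-bucket/ubuntu-web.template"
    type         = "CLOUD_FORMATION_TEMPLATE"
  }
}

End users then launch only the vetted products from the catalog, while the platform team iterates on the templates behind them.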

The Interface

Once we have a few of these projects in place (e.g., a Service Catalog portfolio or a repo full of composable infrastructure modules that meet our standards), how do we serve them out? How you spur adoption of these tools, and how they are consumed, can be very different depending on your organizational structure. We don’t want to upset workflow and how work gets done; we just want it to go faster and be more reliable. This is what we talk about when we mention the interface. Whichever way work flows in, we should supplement it with some type of software or automation to link those pieces of work together. Here are a few examples of how this might look (depending on your organization):

1.) Central IT Managed Provisioning

If you have an organization that manages requests for infrastructure, this paradigm shift might seem daunting. The interface in this case is the ticketing system. This is where we would create an integration with our ticketing software to automatically pull the correct project from the Service Catalog or module repo based on some criteria in the ticket. The interface doesn’t change but is instead supplemented by automation to answer these requests, saving time and providing faster delivery of service.

2.) Full Stack Engineers

If you have engineers who develop both software and the infrastructure that runs their applications, this is the easiest scenario to address in some regards and the hardest in others. Your interface might be a build server, or it could simply be the adoption of an internal open source model where each team develops modules and shares them in a common place, constantly trying to save time and not reinvent the wheel.

Supplementing with software or automation can be done in a ton of ways. Check out an example Kelsey Hightower wrote using Jira.

“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” – John Gall

All good automation starts out with a manual and well-defined process. Standardizing and automating infrastructure development processes begins with understanding how our internal teams can best be organized, so that work can be performed efficiently before automation begins. Work with your teammates to create a value stream map to understand the process entirely before making any effort to automate a workflow.

With 2nd Watch designs and automation you can deploy quicker, learn faster and modify as needed with Continuous Integration / Continuous Deployment (CI/CD). Our Workload Solutions transform on-premises workloads to digital solutions in the public cloud with next generation products and services.  To accelerate your infrastructure development so that you can deploy faster, learn more often and adapt to customer requirements more effectively, speak with a 2nd Watch cloud deployment expert today.

– Lars Cromley, Director of Engineering, Automation, 2nd Watch