Evolving Operations to Maximize AWS Cloud Native Services

As a Practice Director of Managed Cloud Services, my team and I see well-intentioned organizations fall victim to a very common scenario: despite the business migrating from its data center to Amazon Web Services (AWS), its system operations team doesn’t adjust to the new environment. The team attempts to continue performing the same activities it did when its physical hardware resided in a data center or at another hosting provider.

The truth is that modernizing your monolithic applications and infrastructure requires new skill sets, knowledge, expertise, and understanding to get the desired results. Unless yours is a sophisticated, well-funded start-up, your established organization probably won’t know where to begin after the migration is complete. The transition from deploying legacy software in your own data center to utilizing Elastic Kubernetes Service (EKS) and microservices, while deploying code through an automated Continuous Integration and Continuous Delivery (CI/CD) pipeline, is a whole new ballgame – not to mention keeping it all functioning after it is deployed.

In this article, I’m providing some insight into how to overcome the stagnation that hits post-migration. With forethought, AWS understanding, and a reality check on your internal capabilities, organizations can thrive with cloud-native services. Conversely, kicking issues downstream, maintaining inefficiencies, and failing to address new system requirements will compromise the ROI and assumed payoffs of modernization.

Is Your Team Prepared?

Sure, going serverless with Lambda might be all the buzz right now, but it’s not something you can effectively accomplish overnight. Running workloads on cloud-native services and platforms requires a different way of operating, and these new operational demands require that your internal teams be equipped with new skill sets. Unfortunately, a team that mastered the old data center or dedicated hosting provider environment may not be able to jump right in on AWS.

The appeal of AWS is the great flexibility to drive your business and solve unique challenges. However, the ability to provision and decommission on demand also introduces new complexities. If these new challenges are not addressed early on, friction between teams can damage collaboration and adoption, the potential for system sprawl increases, and cost overruns can compromise the legitimacy and longevity of modernization.

Due to the high cost and small talent pool of technically proficient cloud professionals, many organizations struggle to attract these highly desired employees. Luckily, modern cloud-managed service providers can help you wade through the multitude of services AWS introduces. With a trusted and experienced partner by your side, your business can gain the knowledge necessary to drive efficiencies and solve unique challenges. Depending on the level of interaction, existing team members may be able to level up to better manage AWS growth going forward. In the meantime, involving a third-party cloud expert is a quick and efficient way to make sure post-migration change management evolves with your goals, design, timeline, and promised outcomes.

Are You Implementing DevOps?

Modern cloud operations and optimizations address the day-two necessities that go into the long-term management of AWS. DevOps principles and automation need to be heavily incorporated into how the AWS environment operates. With hundreds of thousands of distinct prices and technical combinations, even the most experienced IT organizations can get overwhelmed.

Consider traditional operations management versus cloud-based DevOps. One is a physical hardware deployment that requires logging into the system to perform configurations and then deploying software on top. It’s slow, tedious, and causes a lag for developers as they wait for feature delivery, which negatively impacts productivity. Instead of system administrators performing monthly security patching and logging into each instance separately, a modern cloud operation can efficiently utilize a pipeline with infrastructure as code. Now, you can update your configuration files to use a new image and then use infrastructure automation to redeploy. This treats each server as an ephemeral instance, minimizing any friction or delay for the developer teams.
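As a minimal sketch of that pattern, assuming Terraform and a pre-hardened machine image produced by the build pipeline (the names below are illustrative), the image becomes just another input variable:

    # The image is an input produced by the build pipeline, not patched by hand.
    variable "ami_id" {
      type = string
    }

    # Instances are treated as ephemeral; changing the AMI replaces them.
    resource "aws_instance" "app" {
      ami           = var.ami_id
      instance_type = "t3.medium"
    }

Updating ami_id and running terraform apply replaces the servers with fresh copies built from the new image, so no one ever logs in to patch them.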

This is just one example of how DevOps can and should be used to achieve strong availability, agility, and profitability. Measuring DevOps with the CALMS model provides a guideline for addressing the five fundamental elements of DevOps: Culture, Automation, Lean, Measurement, and Sharing. Learn more about DevOps in our eBook, 7 Major Roadblocks in DevOps Adoption and How to Address Them.

Do You Continue With The Same Behavior?

Monitoring CPU, memory, and disk at the traditional thresholds used on legacy hardware is not necessarily appropriate when utilizing AWS EC2. To achieve the financial and performance benefits of the cloud, you purposely design systems and applications to use, and pay for, only the resources required. New cloud-native technologies, such as Kubernetes and serverless, require that you monitor in different ways to reduce the abundance of unactionable alerts that eventually becomes noise.

For example, when running a Kubernetes cluster, you should implement monitoring that alerts on the number of desired pods. If there’s a big difference between the number of desired pods and currently running pods, this might point to resource problems where your nodes lack the capacity to launch new pods. With a modern managed cloud service provider, cloud operations engineers receive the alert and begin investigating the cause to ensure uptime and continuity for application users. With fewer unnecessary alerts and an escalation protocol for the appropriate parties, triage of the issue can be done more quickly. In many cases, remediation efforts can be automated, allowing for more efficient resource allocation.
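As a rough sketch of that check, assuming the official kubernetes Python client and credentials already configured in a local kubeconfig, a watchdog could compare desired and ready replicas per deployment and flag the gaps:

    from kubernetes import client, config

    config.load_kube_config()  # assumes a kubeconfig with cluster access
    apps = client.AppsV1Api()

    for dep in apps.list_deployment_for_all_namespaces().items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            # In practice this would feed an alerting pipeline or page
            # an on-call engineer rather than print.
            print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
                  f"{ready}/{desired} pods ready")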

How Are You Cutting Costs?

Many organizations initiate cloud migration and modernization to gain cost-efficiency. Of course, these financial benefits are only accessible when modern cloud operations are fully in place.

Anyone can create an AWS account, but not everyone has visibility into, or concern for, budgetary costs, so spending can quickly exceed expectations. This is where establishing a strong governance model and expanding automation can help you to achieve your cost-cutting goals. You can limit instance size deployment using IAM policies to ensure larger, more expensive instances are not unnecessarily utilized. Another cost that can quickly grow without the proper controls is your S3 storage. Enabling policies to have objects expire and automatically be deleted can help to curb an explosion in storage costs. Enacting policies like these to control costs requires that your organization take the time to think through the governance approach and implement it.
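As an illustrative sketch of the S3 control, assuming boto3 and a hypothetical bucket name, a lifecycle rule can expire objects automatically so stale data never piles up:

    import boto3

    s3 = boto3.client("s3")

    # Bucket, prefix, and retention period are placeholders; pick values
    # that match your organization's data retention policy.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-log-archive",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-old-logs",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "logs/"},
                    "Expiration": {"Days": 90},
                }
            ]
        },
    )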

Evolving in the cloud can reduce computing costs by 40-60% while increasing efficiency and performance. However, those results are not guaranteed. Download our eBook, A Holistic Approach to Cloud Cost Optimization, to ensure a cost-effective cloud experience.

How Will You Start Evolving Now?

Time is of the essence when it comes to post-migration outcomes – and the board and business leaders around you will be expecting results. As your organization looks to leverage AWS cloud-native services, your development practices will become more agile and require a more modern approach to managing the environment. To keep up with these business drivers, you need a team to serve as your foundation for evolution.

2nd Watch works alongside organizations to help start or accelerate your cloud journey to become fully cloud native on AWS. With more than 10 years of migrating, operating, and effectively managing workloads on AWS, 2nd Watch can help your operations staff evolve to operate in a modern way and hit your goals. Are you ready for the next step in your cloud journey? Contact us and let’s get started.



Accelerating Application Development with DevOps

If you moved to the cloud to take advantage of rapid infrastructure deployment and development support, you understand the power of quickly bringing applications to market. Gaining a competitive edge is all about driving customer value fast. Immersing a company in a DevOps transformation is one of the best ways to achieve speed and performance.

In this blog post, we’re building on the insights of Harish Jayakumar, Senior Manager of Application Modernization and Solutions Engineering at Google, and Joey Yore, Manager and Principal Consultant at 2nd Watch. See how the highest performing teams in the DevOps space are achieving strong availability, agility, and profitability with application development according to four key metrics. Understand the challenges, solutions, and potential outcomes before starting your own DevOps approach to accelerating app development.

Hear Harish and Joey on the 2nd Watch Cloud Crunch podcast, 5 Strategies to Maximize Your Cloud’s Value: Strategy 2 – Accelerating Application Development with DevOps  

What is DevOps?

Beyond the fact that DevOps combines software development (Dev) and IT operations (Ops), DevOps is pretty hard to define. Harish thinks the lack of a clinical, agreed-upon definition is by design. “I think everyone is still learning how to get better at building and operating software.” With that said, he describes his definition of DevOps as, “your software delivery velocity, and the reliability of it. It’s basically a cultural and organizational movement that aims to increase software reliability and velocity.”

The most important thing to remember about a DevOps transformation and the practices and principles that make it possible is culture. At its core, DevOps is a cultural shift. Without embracing, adopting, and fostering a DevOps culture, none of the intended outcomes are possible.

Within DevOps there are five key principles to keep top of mind:

  1. Reduce organizational silos
  2. Accept failure as the norm
  3. Implement gradual changes
  4. Leverage tooling and automation
  5. Measure

Measuring DevOps: DORA and CALMS

Google acquired DevOps Research and Assessment (DORA) in 2018 and relies on the methodology developed from DORA’s annual research to measure DevOps performance. “DORA follows a very strong data-driven approach that helps teams leverage their automation process, cultural changes, and everything around it,” explains Harish. Fundamental to DORA are four key metrics that offer a valid and reliable way to measure the research and analysis of any kind of software delivery performance. These metrics gauge the success of DevOps transformations from ‘low performers’ to ‘elite performers’.

  1. Deployment frequency: How often the organization successfully releases to production
  2. Lead time for changes: The amount of time it takes a commit to get into production
  3. Change failure rate: The percentage of deployments causing a failure in production
  4. Time to restore service: How long it takes to recover from a failure in production

DORA is similar to the CALMS model, which addresses the five fundamental elements of DevOps, starting with where the enterprise is today and continuing throughout the transformation. CALMS also uses the four key metrics identified by DORA to evaluate DevOps performance and delivery. The acronym stands for:

Culture: Is there a collaborative and customer-centered culture across all functions?

Automation: Is automation being used to remove toil or wasted work?

Lean: Is the team agile and scrappy with a focus on continuous improvement?

Measurement: What, how, and against what benchmarks is data being measured?

Sharing: To what degree are teams teaching, sharing, and contributing to cross-team collaboration?

DevOps Goals: Elite Performance for Meaningful Business Impacts

Based on the metrics above, organizations fall into one of four levels: low, medium, high, or elite performers. The aspiration to achieve elite performance is driven by the significant business impact these teams have on their overall organization. According to Harish, and based on research by the DORA team at Google, “It’s proven that elite performers in the four key metrics are 3.56 times more likely to have a stronger availability practice. There’s a strong correlation between these elite performers and the business impact of the organization that they’re a part of.”

He goes on to say, “High performers are more agile. We’ve seen 46 times more frequent deployments from them. And it’s more reliable. They are five times more likely to exceed any profitability, market share, or productivity goals on it.” Being able to move quickly enables these organizations to deliver features faster, and thus increase their edge or advantage over competitors.

Focusing on the five key principles of DevOps is critical for going from ideation to implementation at a speed that yields results. High and elite performers are particularly agile with their use of technology. When a new technology is available, DevOps teams need to be able to test, apply, and utilize it quickly. With the right tools, teams are alerted immediately to code breaks and where that code resides. Using continuous testing, the team can patch code before it affects other systems. The results are improved code quality and accelerated, efficient recovery. You can see how each pillar of DevOps – from culture and agility to technology and measurement – feeds into the others to deliver high levels of performance, solid availability, and uninterrupted continuity.

Overcoming Common DevOps Challenges

Because culture is so central to a DevOps transformation, most challenges can be solved through cultural interventions. Like any cultural change, there must first be buy-in and adoption from the top down. Leadership plays a huge role in setting the tone for the cultural shift and continuously supporting an environment that embraces and reinforces the culture at every level. Here are some ways to influence an organization’s cultural transformation for DevOps success.

  • Build lean teams: Small teams are better enabled to deliver the speed, innovation, and agility necessary to achieve across DevOps metrics.
  • Enable and encourage transparency: Joey says, “Having those big siloed teams, where there’s a database team, the development team, the ops team – it’s really anti-DevOps. What you want to start doing is making cross-functional teams to better aid in knocking down those silos to improve deployment metrics.”
  • Create continuous feedback loops: Among lean, transparent teams there should be a constant feedback loop of information sharing to influence smarter decision making, decrease redundancy, and build on potential business outcomes.
  • Reexamine accepted protocols: Always be questioning the organizational and structural processes, procedures, and systems that the organization grows used to. For example, how long does it take to deploy one line of change? Do you do it repeatedly? How long does it take to patch and deploy after discovering a security vulnerability? If it’s five days, why is it five days? How can you shorten that time? What technology, automation, or tooling can increase efficiency?
  • Measure, measure, measure: Utilize DORA’s research to establish elite performance benchmarks and realistic upward goals. Organizations should always be identifying barriers to achievement and continuously measuring progress toward those goals.
  • Aim for total performance improvements: Organizations often think they need to choose between performance metrics. For example, in order to influence speed, stability may be negatively affected. Harish says, “Elite performers don’t see trade-offs,” and points to best practices like CI/CD, agile development and testing, built-in automation, standardized platforms and processes, and automated environment provisioning for comprehensive DevOps wins.
  • Work small: Joey says, “In order to move faster, be more agile, and accelerate deployment, you’re naturally going to be working with smaller pieces with more automated testing. Whenever you’re making changes on these smaller pieces, you’re actually lowering your risk for anyone’s deployment to cause some sort of catastrophic failure. And if there is a failure, it’s easy to recover. Minimizing risk per change is a very important component of DevOps.”

Learn more about avoiding common DevOps issues by downloading our eBook, 7 Major Roadblocks in DevOps Adoption and How to Address Them

Ready to Start Your DevOps Transformation?

Both Harish and Joey agree that the best approach to starting your own DevOps transformation is one based on DevOps – start small. The first step is to compile a small team to work on a small project as an experiment. Not only will it help you understand the organization’s current state, but it also helps minimize risk to the organization as a whole. Step two is to identify what your organization and your DevOps team are missing. Whether it’s technology and tooling or internal expertise, you need to know what you don’t know to avoid regularly running into the same issues.

Finally, you need to build those missing pieces to set the organization up for success. Utilize training and available technology to fill in the blanks, and partner with a trusted DevOps expert who can guide you toward continuous optimization.

2nd Watch provides Application Modernization and DevOps Services to customize digital transformations. Start with our free online assessment to see how your application modernization maturity compares to other enterprises. Then let 2nd Watch complete a DevOps Transformation Assessment to help develop a strategy for the application and implementation of DevOps practices. The assessment includes analysis using the CALMS model, identification of software development and level of DevOps maturity, and delivering tools and processes for developing and embracing DevOps strategies.



Don’t Be Afraid of Learning New DevOps Tooling

DevOps is defined as a set of practices that combines software development (Dev) and information-technology operations (Ops) to shorten the systems development life cycle and provide continuous delivery with high software quality (DevOps, n.d.). The idea of tools, or tooling, is not mentioned in that definition, likely due to misconceptions about tools or their sheer number.

Indeed, there are a lot of DevOps tools that span across a multitude of categories. From application performance monitoring tools to configuration management tools to deployment tools to continuous integration and continuous delivery tools and more, it’s no wonder IT organizations are hesitant to learn new ones.

Misconceptions about DevOps tooling are a common problem in software development and are essential to highlight. When 2nd Watch guides organizations along their DevOps transformation journey, common misconceptions encountered include:

  • “We already have a tool that does that.”
  • “We have too many tools already.”
  • “We need to standardize.”
  • “Our teams aren’t ready for that.”

However, these statements contradict what DevOps represents. Although DevOps practices are not about running a particular set of tools, tooling is undoubtedly an essential component that helps DevOps teams operate and evolve applications quickly and reliably. Tools also help engineers independently accomplish tasks that generally require help from other groups, further increasing a team’s velocity.

Crucial DevOps Practices

Several practices are crucial to a successful DevOps transformation and counter the objections mentioned above.

Experimentation

Humans learn by making mistakes and adjusting their behavior accordingly to improve continuously. IT practitioners do that day-in and day-out when they write code, change a configuration setting, or look at a new dashboard metric. All the above statements work against this idea, hindering experimentation. A new tool may open new avenues of test automation or provide better tracking mechanisms. An experimentation mindset is crucial to the development process for developers to get better at it and improve their understanding of when, where, and how something will or will not fit into their workflow.

Shifting Left

“Shifting left” is a common DevOps practice and is important to understand. Many of the above statements stifle this idea due to siloed, gated workflows. For example, a product’s development workflow starts on the left, then flows along different stages and gates toward the right, eventually making it into production. By moving feedback as far left as possible, implementers are alerted to problems faster and quickly remedy the issues.

Servant Leadership

Servant leadership is critical in handling a changing culture. It is another aspect of shifting left where, rather than “managing” specifics and giving orders, servant leaders encourage and guide decisions toward a common goal. Ultimately, the decision-making process is moving to the left.

One tool to rule them all and, with automation, bind them!

Described here is a specific tool that helps developers be more efficient in their software delivery, which, in turn, drives organizations to embrace the DevOps culture. It is a unified tool that provides a single interface that everyone can understand and to which they can contribute.

GNU Make

“GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program’s source files.” – gnu.org

GNU Make generates artifacts from source files by running local commands. It’s prevalent in the UNIX-like (*nix) world, and for a good reason. In the past, developers primarily used GNU Make to build Linux source files and compile C and C++. However, its absolute dominance in that realm made it readily available in nearly every *nix flavor and with minimal variance. It’s a tried-and-true tool that’s relatively simple to use and available everywhere.

The Basics

There are two things needed to use GNU Make.

  • the GNU Make tool
  • a Makefile

Because installation of the tool is outside of the scope of this article, that step will be skipped. However, if a *nix system is used, Make is likely already installed.

Rather than using the GNU Make tool to create an executable, make is strictly used here to set up automation, significantly simplifying the code. Terraform is the tool used to demonstrate how GNU Make works.

The Makefile goes directly into the root of the project and looks like this:
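Below is a minimal sketch consistent with the rules described in the rest of this article (note that recipe lines in a Makefile must be indented with a tab character):

    .PHONY: init
    init:
    	terraform init

    .PHONY: plan
    plan: init
    	terraform plan

    .PHONY: test
    test:
    	terraform fmt
    	terraform validate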

The Makefile

The GNU Make tool will look inside the current working directory for the “Makefile” and use that as its set of instructions. Although the tool will look for Makefile or makefile, it is a firmly accepted standard practice to use the capitalized form.

The Makefile defines a set of rules, targets, and dependencies. Since the Makefile is a wrapper, file targets are omitted and the rules are listed as .PHONY, telling make that the rules will not output a file.

Defining Rules
  1. The init rule:

In this example, make is being told that it is not building anything with the rule (.PHONY: init), what the rule is named (init:), and which commands make is to execute when the init rule is run (terraform init).

The following command is run in the root of the project to execute the rule:
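    make init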

  2. The plan rule

Once the project is initialized, Terraform displays the changes it wants to make to the infrastructure before implementation to verify expectations. (NOTE: terraform plan prints out a description of what Terraform is going to do, but does not make changes.)
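A simple version of the rule, continuing the same sketch:

    .PHONY: plan
    plan: init
    	terraform plan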

.PHONY is used again because, like before, nothing is being created. Here, the init rule is set up as a dependency for the plan rule. In this case, when make plan is executed, the init rule will run first. Any number of these rules can be strung together, helping to make them more atomic. (NOTE: it is not necessary to run init before plan with Terraform.)

With a little more complexity, make can codify important steps in an operation, as in the sketch below. There, an environment variable is required (ENV), then the appropriate workspace is selected before running the plan with the appropriate variable file set for that environment (-var-file).
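    # The var-file naming convention (dev.tfvars, prod.tfvars) is illustrative.
    .PHONY: plan
    plan: init
    	terraform workspace select $(ENV)
    	terraform plan -var-file=$(ENV).tfvars

Run it as, for example, ENV=dev make plan.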

  3. The test rule

Create a simple test rule for the Terraform project:
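    # fmt rewrites the templates into canonical form; validate checks validity.
    .PHONY: test
    test:
    	terraform fmt
    	terraform validate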

Here, the code is formatted to canonicalize it, and then terraform validate is run to ensure the templates are valid.

Concepts of Unifying Tools

Central to the idea of a unifying tool are self-documenting source code and the adapter pattern. Each concept is detailed below to provide a better understanding of why they are so important.

Self-Documenting Code

The idea behind self-documenting code is to have code written in a way that’s explanatory to its purpose. Variable names, function names, classes, objects, and methods should all be named appropriately. Volumes of documentation are not needed to describe what a piece of code is doing. As it’s code, the underlying understanding is that it’s executable, so what is read in the code is what will happen when it is executed.

In the above examples, executable documentation for the project was also created by using make as the central “documentation” point. Even when things got more complex with the second plan example, any editor can be used to see the definition for the plan rule.

In cases where Terraform isn’t used often, it isn’t necessary to remember every command, variable, or command-line argument needed to be productive. Which commands are run, and where, can still be seen. And, if more details are required, execute the command in a terminal and read its help information.

The Adapter Design Pattern

In software engineering, the adapter design pattern is a well-known use pattern to allow different inputs into a standard interface. The concept may be better understood through the following analogy.

There are many different types of outlets used throughout the world. Europe uses a round, 2-pronged plug. The United States (US) uses a 2-pronged flat or 3-pronged (two flat, one round) plug. One solution for US plugs to work in European outlets is to modify the outlet itself, allowing for both US and European plugs. However, this “solution” is costly. Instead, an easier and less expensive solution is to use an adapter that understands the inputs of the US plug and converts them to the European form that can then be plugged into the wall.

This is seen when documentation for command-line tools is accessed using the help argument. Nearly every command-line tool understands the --help switch or help command. It’s ubiquitous.

The earlier examples (init, plan) are more specific to Terraform, but the test rule is not. The test rule is an excellent example of being able to use make as an adapter!

Development testing is a critical aspect of all engineering, including software delivery. Writing tests ensures that any changes introduced into a project don’t cause any unintended problems or bugs.

Every language and many tools have individual testing frameworks—multiple frameworks per language, many times with different commands for each. By using make as the user interface to the project, workflows can stay the same—or very similar—across multiple projects utilizing multiple tools. For example, make test can be run in any project, and the tests for that project will execute, regardless of the language. Python? Golang? .NET? Ruby? make test can be used for each one in every project.

Another example is the init rule in the above Makefile. There’s typically some setup or configuration for every project that needs to happen to get an environment ready for use. The init rule can be used to set up any project, whether that means running pip for Python or npm for Node, as in the sketch below.
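As a purely hypothetical illustration, a Python project’s Makefile could expose the exact same interface while running entirely different commands underneath:

    .PHONY: init test

    init:
    	pip install -r requirements.txt

    test:
    	pytest

Whether the project underneath is Terraform or Python, make init and make test mean the same thing to the person running them.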

Conclusion

The fear of tooling burnout need not be apocalyptic. By utilizing a wrapper tool like GNU Make to provide a standard interface, many problems that might be encountered when implementing a breadth of tools can be mitigated. Cultural changes are still essential, and tools alone will not solve them, but a piece of tech like GNU Make can help alleviate the perception of “too many tools.”

Contact 2nd Watch to discuss how we can help further your DevOps transformation.

-by Craig Monson, Principal Cloud Consultant, 2nd Watch


The Importance of Leadership in Moving to a DevOps Culture

Why DevOps?

DevOps is a set of practices that improve the efficiency and effectiveness of IT operations. Utilizing many aspects of agile methodology, DevOps aims to shorten the systems development life cycle and provide continuous improvement. As you consider incorporating DevOps into your operations, understand the effect DevOps has on processes and culture. Successful implementation is about finding the right balance of attention on people, processes, and technology to achieve improvement.

The ultimate goal is continuous improvement through processes and tools. No amount of tooling, automation, or fancy buzzwords can have a greater effect on an organization than transforming its culture, and there’s no other way to do that than to focus on the change.

Understanding What You Are Trying to Accomplish with DevOps

Ask yourself what you are trying to achieve. It may seem obvious, but you may get on the wrong track without thinking about what you want your development and operations teams to achieve.

Often, when clients approach 2nd Watch wanting to incorporate DevOps, they are really asking for automation tools and nothing else. While automation has certain benefits, DevOps goes beyond the benefits of technology to improve processes, help manage change more effectively, and improve organizational culture. Change is difficult. However, implementing a cultural shift is particularly challenging. Often overlooked, cultural change is the greatest pain 2nd Watch consultants encounter when working with companies trying to make substantial organizational changes. Even implementing things as simple as sharing responsibility, configuration management, or version control can cause turmoil!

From IT Management to Leadership

There is a distinction between what it means to be a manager versus being a leader. And, in all industries, being a manager does not necessitate being a good leader.

It’s helpful to consider the progression of those in technical roles to management. Developers and operations personnel are typically promoted to managers because they are competent in their technical position—they excel at their current software development process, configuring a host, or operating a Kubernetes cluster. However, as managers, they’re also tasked with directing staff, which may put them outside of their comfort zone. They are also responsible for pay, time and attendance, morale, and hiring and firing. They likely were not promoted for their people skills but for their technical competencies.

Many enterprise organizations make the mistake of believing employees who have outstanding technical skills will naturally excel at people management once they get that promotion. Unfortunately, this mistake breeds many managers who fall short of potential, often negatively affecting corporate culture.

Leading the Change

It’s imperative to understand the critical role leadership plays in navigating the amount of change that will likely occur and in changing the organization’s culture.

Whether you’re a manager or leader matters a lot when you answer the question, “What do I really want out of DevOps?” with, “I want to be able to handle change. Lots and lots of change.”

Better responses would include:

  • “I want our organization to be more agile.”
  • “I want to be able to react faster to the changing market.”
  • “I want to become a learning organization.”
  • “I want to embrace a DevOps culture for continuous improvement.”

The underlying current of these answers is change.

Unfortunately, when management is bungled, it’s the people below who pay the price. Those implementing the changes tend to take the brunt of the change pain. Not only does this lower morale, but it can cause a mutiny of sorts. Apathy can affect quality, causing outages. The best employees may jump ship for greener pastures. Managers may give up on culture change entirely and go back to the old ways.

However, there is light at the end of the tunnel. With a bit of effort and determination, you can learn to lead change just as you learned technical skills.

Go to well-known sources on management improvement and change management. Leading Change by John P. Kotter details the successful implementation of change into an organization. Kotter discusses eight steps necessary to help improve your chances of being successful in changing an organization’s culture:

  1. Establishing a sense of urgency
  2. Creating the guiding coalition
  3. Developing a vision and strategy
  4. Communicating the change vision
  5. Empowering broad-based action
  6. Generating short-term wins
  7. Consolidating gains and producing more change
  8. Anchoring new approaches in the culture

It’s all about people. Leaders want to empower their teams to make intelligent, well-informed decisions that align with their organization’s goals. Fear of making mistakes should not impede change.

Mistakes happen. Instead of managers locking their teams down and passing workflows through change boards, leaders can embrace the DevOps movement and foster a culture where their high-performing DevOps team can make mistakes and quickly remedy and learn from them.

Each step codifies what most organizations are missing when they start a transformation: focusing on the change and moving from a manager to a leader.

The 5 Levels of Leadership

Learning the skills necessary to become a great leader is not often discussed when talking about leadership or management positions. We are accustomed to many layers of management and managers sticking to the status quo in the IT industry. But change is necessary, and the best place to start is with ourselves.

The 5 Levels of Leadership by John C. Maxwell is another excellent source of information for self-improvement on your leadership journey:

  • Level 1 – Position: People follow you only because they believe they have to.
  • Level 2 – Permission: People follow you because they want to.
  • Level 3 – Production: People follow you because of what you have done for the organization.
  • Level 4 – People Development: People follow you because of what you have done for them.
  • Level 5 – Pinnacle: People follow because of who you are and what you represent.

Leadership easily fits into these levels, and determining your position on the ladder can help you decide what to work on next. Not only are these levels applicable to individuals but, since an organization’s culture can revolve around how good or bad its leadership is, they end up being a mirror of the problems the organization faces altogether.

Conclusion

When transforming to a DevOps culture, it’s essential to understand ways to become a better leader. In turn, making improvements as a leader will help foster a healthy environment in which change can occur. And there’s no better catalyst to becoming a great leader than being able to focus on the change.

2nd Watch collaborates with many different companies just beginning their workflow modernization journey. Contact us to discuss how we can help your organization further adopt a DevOps culture.

-Craig Monson


3 Advantages to Embracing the DevOps Movement (Plus Bonus Pipeline Info!)

What is DevOps?

As cloud adoption increases across all industries, understanding the practices and tools that help software run efficiently is essential to how an organization and its cloud environment operate. However, many companies do not have the knowledge or expertise needed for success. In fact, Puppet’s 2021 State of DevOps Report found that while 2 in 3 respondents report using the public cloud, only 1 in 4 use the cloud to its full potential.

Enter the DevOps movement

The concept of DevOps combines development and operations to encourage collaboration, embrace automation, and speed up the deployment process. Historically, development and operations teams worked independently, leading to inefficiencies and inconsistencies in objectives and department leadership. DevOps is the movement to eliminate these roadblocks and bring the two communities together to transform how their software operates.

According to a 2020 Atlassian survey, 99% of developers and IT decision-makers say DevOps has positively impacted their organization, with benefits including career advancement and better, faster deliverables. Given the favorable outcome for these developers and IT decision-makers, adopting DevOps tools and practices is a no-brainer. But here are three more advantages to embracing the DevOps movement:

1. Speed

Practices like microservices and continuous delivery allow your business operations to move faster, as your operations and development teams can innovate for customers more quickly, adapt to changing markets, and efficiently drive business results. Additionally, continuous integration and continuous delivery (CI/CD) automate the software release process for fast and continuous software delivery. A quick release process will allow you to release new features, fix bugs, respond to your customers’ needs, and ultimately, provide your organization with a competitive advantage.

2. Security

While DevOps focuses on speed and agile software development, security is still of high priority in a DevOps environment. Tools such as automated compliance policies, fine-grained controls, and configuration management techniques will help you reap the speed and efficiencies provided by DevOps while maintaining control and compliance of your environment.

3. Improved Collaboration

DevOps is more than just technical practices and tools. A complete DevOps transformation involves adopting cultural values and organizational practices that increase collaboration and improve company culture. The DevOps cultural model emphasizes values like ownership and accountability, which work together to improve company culture. As development and operations teams work closely together, their collaboration reduces inefficiencies in their workflows. Additionally, collaboration entails succinctly communicating roles, plans, and goals. The State of DevOps Report also found that clarity of purpose, mission and operating context seem to be strongly associated with highly evolved organizations.

In short, teams who adopt DevOps practices can improve and streamline their deployment pipeline.

What is a DevOps Pipeline?

The term “DevOps Pipeline” is used to describe the set of automated processes and tools that allow developer and operations teams to implement, test, and deploy code to a production environment in a structured and organized manner.

A DevOps pipeline may look different or vary from company to company, but there are typically eight phases: plan, code, build, test, release, deploy, operate, and monitor. When developing a new application, a DevOps pipeline ensures that the code runs smoothly. Once written, various tests are run on the code to flush out potential bugs, mistakes, or any other possible errors. After building the code and running the tests for proper performance, the code is ready for deployment to external users.

A significant characteristic of a DevOps pipeline is that it is continuous, meaning each function occurs on an ongoing basis. The most vital practice, mentioned earlier, is CI/CD. CI, or continuous integration, is the practice of automatically and continuously building and testing any changes submitted to an application. CD, or continuous delivery, extends CI by using automation to release software frequently and predictably with the click of a button. CD allows developers to perform a more comprehensive assessment of updates to confirm there are no issues. A minimal pipeline definition is sketched below.
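As an illustrative sketch only (GitHub Actions syntax; the job and step names are hypothetical, and the make targets stand in for whatever build and test commands a project uses), a CI stage that builds and tests every push might look like this:

    name: ci
    on: [push]
    jobs:
      build-and-test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build
            run: make build
          - name: Test
            run: make test

A continuous delivery stage would extend this with a release job gated behind a manual approval, while continuous deployment, described below, removes the manual gate entirely.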

Other “continuous” DevOps practices include:

  • Continuous deployment: This practice goes beyond continuous delivery (CD). It is an entirely automated process that requires no human intervention, eliminating the need for a “release day.”
  • Continuous feedback: Applying input from customers and stakeholders, and systematic testing and monitoring code in the pipeline, allows developers to implement changes faster, leading to greater customer satisfaction.
  • Continuous testing: A fundamental enabler of continuous feedback. Performing automated tests on the code throughout the pipeline leads to faster releases and a higher quality product.
  • Continuous monitoring: Another component of continuous feedback. Use this practice to continuously assess the health and performance of your applications and identify any issues.
  • Continuous operations: Use this practice to minimize or eliminate downtime for your end users through efficiently managing hardware and software changes.

Embrace the DevOps Culture

We understand that change is not always easy. However, through our Application Modernization & DevOps Transformation process, 2nd Watch can help you embrace and achieve a DevOps culture.

From a comprehensive assessment that measures your current software development and operational maturity to developing a strategy for where and how to apply different DevOps approaches to ongoing management and support, we will be with you every step of the way. Following is what a typical DevOps transformation engagement with us looks like:

Phase 0: Basic DevOps Review

  • DevOps and assessment overview delivered by our Solutions Architects

Phase 1: Assessment & Strategy

  • Initial 2-4 week engagement to measure your current software development and operational maturity
  • Develop a strategy for where and how to apply DevOps approaches

Phase 2: Implementation

Phase 3: Onboarding to Managed Services

  • 1-2 week onboarding to 2nd Watch Managed DevOps service and integration of your operations team and tools with ours

Phase 4: Managed DevOps

  • Ongoing managed service, including monitoring, security, backups, and patching
  • Ongoing guidance and coaching to help you continuously improve and increase the use of tooling within your DevOps teams

Getting Started with DevOps

Companies may understand the business benefits derived from DevOps, but many need help realizing them, and 2nd Watch has the knowledge and expertise to accelerate your digital transformation journey. 2nd Watch is a Docker Authorized Consulting Partner and has earned the AWS DevOps Competency for technical proficiency, leadership, and proven success in helping customers adopt the latest DevOps principles and technologies. Contact us today to get started.

-Tessa Foley, Marketing



Cloud Automation for I.T. Governance, Risk, and Compliance (GRC) in Healthcare

It has been said that the “hero of a successful digital transformation is GRC.” The ISACA website states, “to successfully manage the risk in digital transformation you need a modern approach to governance, risk and regulatory compliance.” For GRC program development, it is important to understand the health information technology resources and tools available to enable long-term success.

What is GRC and why it important?

According to the HIPAA Journal, the average cost of a healthcare data breach is now $9.42 million. In the first half of 2021, 351 significant data breaches were reported, affecting nearly 28 million individuals. The need for effective information security and controls among healthcare providers, insurers, and biotechnology and health research companies has never been more acute. Protecting sensitive data and establishing a firm security posture is essential. Improving health care and reducing cost rely on structured approaches and thoughtful implementation of available technologies to help govern data and mitigate risk across the enterprise.

Effective and efficient management of governance, risk, and compliance, or GRC, is fast becoming a business priority across industries. Leaders at hospitals and health systems of all sizes are looking for ways to build operating strategies that harmonize and enhance efforts for GRC. Essential to that mission are effective data governance, risk management, regulatory compliance, business continuity management, project governance, and security. Rather than stand-alone or siloed security or compliance efforts, a cohesive program coupled with GRC solutions allows organizational leaders to address the multitude of challenges more effectively and efficiently.

What are the goals for I.T. GRC?

For GRC efforts, leaders are looking to:

  • Safeguard protected healthcare data
  • Meet and maintain compliance with evolving regulatory mandates and standards
  • Identify, mitigate, and prevent risk
  • Reduce operational friction
  • Build in and utilize best practices

Managing governance, risk, and compliance in healthcare enterprises is a daunting task. GRC implementation for healthcare risk managers can be difficult, especially during this time of rapid digital and cloud transformation. But relying on internal legacy methods and tools leads to the same issues that have been seen on-premises, stifling innovation and improvement. As organizations adapt to cloud environments as a key element of digital transformation and integrated health care, leaders are realizing that now is the time to leverage the technology to implement GRC frameworks that accelerate their progress toward positive outcomes. What’s needed is expertise and a clear roadmap to success.

Cloud Automation of GRC

The road to success starts with a framework, aligned to business objectives, that provides cloud automation of Governance, Risk, and Compliance. Breaking this into three distinct phases, ideally this would involve:

  1. Building a Solid Foundation – within the cloud environment, ensuring infrastructure and applications are secured before they are deployed.
  • Image/Operation System hardening automation pipelines.
  • Infrastructure Deployment Automation Pipelines including Policy as Code to meet governance requirements.
  • CI/CD Pipelines including Code Quality and Code Security.
  • Disaster Recovery as a Service (DRaaS) meeting the organization’s Business Continuity Planning requirements.
  • Configuration Management to allow automatic remediation of your applications and operating systems.
  • Cost Management strategies with showback and chargeback implementation.
  • Automatic deployment and enforcement of standard security tools including FIM, IDS/IPS, AV and Malware tooling.
  • IAM integration for authorization and authentication with platforms such as Active Directory, Okta, and PingFederate, allowing for more granular control over users and elevated privileges in the clouds.
  • Reference Architectures created for the majority of the organization’s needs that are pre-approved, security baked-in to be used in the infrastructure pipelines.
  • Self-service CMDB integration with tools such as ServiceNow, Remedy, and Jira Service Desk, allowing business units to provision their own infrastructure while providing the proper governance guardrails.
  • Resilient Architecture designs
  2. Proper Configuration and Maintenance – Infrastructure misconfiguration is the leading cause of data breaches in the cloud, and a big reason misconfiguration happens is infrastructure configuration “drift,” or change that occurs in a cloud environment post-provisioning. Using automation to monitor and self-remediate the environment will ensure the cloud environment stays in the proper configuration, eliminating the largest cause of incidents. Since workloads will live most of their life in this phase, it is important to ensure there isn’t any drift from the original secure deployment. An effective program will need:
  • Cloud Integrity Monitoring using cloud native tooling.
  • Log Management and Monitoring with centralized logging, critical in a well-designed environment.
  • Application Monitoring
  • Infrastructure Monitoring
  • Managed Services including patching to resolve issues.
  • SLAs to address incidents and quickly get them resolved.
  • Cost Management to ensure that budgets are met and there are no runaway costs.
  • Perimeter security utilizing cloud native and 3rd party security appliance and services.
  • Data Classification
  3. Use of Industry-Leading Tools – for risk assessment, reporting, verification, and remediation. Thwart future problems and provide evidence to stakeholders that the cloud environment is rock solid. Tools and verification components would include:
  • Compliance reporting
  • Risk Registry integration into tools
  • Future attestations (BAAs)
  • Audit evidence generation

Where do you go from here?

Your organization needs to innovate faster and drive value with the confidence of remaining in compliance. You need to get to a proactive state instead of being reactive. Consider an assessment to help you evaluate your organization’s place in the cloud journey and how the disparate forms of data in the organization are collected, controlled, processed, stored, and protected.

Start with an assessment that includes:

  • Identification of security gaps
  • Identification of foundational gaps
  • Remediation plans
  • Managed service provider onboarding plan
  • A Phase Two (Foundational/Remediation) proposal and Statement of Work

About 2nd Watch

2nd Watch is a trusted and proven partner, providing deep skills and advisory to leading organizations for over a decade. We earned a client Net Promoter Score of 85, a good way of telling you that our customers nearly always recommend us to others. We can help your organization with cloud native solutions. We offer skills in the following areas:

  • Developing cloud first strategies
  • Migration of workloads to the cloud
  • Implementing automation for governance and security guardrails
  • Implementing compliance controls and processes
  • Pipelines for data, infrastructure and application deployment
  • Subject matter expertise for FHIR implementations
  • Managed cloud services

Schedule time with an expert now – contact us.

-Tom James, Sr. Marketing Manager, Healthcare


An Introduction to AWS Proton

As a business scales, so do its software and infrastructure. As desired outcomes adapt and become more complex, they can quickly create overhead that is difficult for platform teams to manage over time, and these challenges often limit the benefits of embracing containers and serverless. Shared services offer many advantages in these scenarios by providing a consistent developer experience while also increasing productivity and the effectiveness of governance and cost management.

In December 2020, Amazon Web Services announced Proton: a service targeted at providing tooling to manage complex environments while bridging infrastructure and deployment for developers. In this blog we will take a closer look into the benefits of the AWS Proton service offering.

What is AWS Proton?

AWS Proton is a fully managed delivery service, targeted at container and serverless workloads, that gives engineering teams the tooling to automate provisioning and deploy applications while enabling them to provide observability and enforce compliance and best practices. With AWS Proton, development teams use templated resources to stand up infrastructure and deploy their code. This in turn increases developer productivity by allowing them to focus on their code and software delivery, reducing management overhead, and increasing release frequency. Teams can use AWS Proton through the AWS Console and the AWS CLI, allowing teams to get started quickly and automate complicated operations over time.

How does it work?

The AWS Proton framework allows administrators to define versioned templates which standardize infrastructure, enforce guardrails, leverage Infrastructure as Code with CloudFormation, and provide CI/CD with CodePipeline and CodeBuild to automate provisioning and deployments. Once service templates are defined, developers can choose a template and use it to deploy their software. As new code is released, the CI/CD pipelines automatically deploy the changes. Additionally, as new template versions are defined, AWS Proton provides a “one-click” interface which allows administrators to roll out infrastructure updates across all the outdated template versions.

When is AWS Proton right for you?

AWS Proton is built for teams looking to centrally manage their cloud resources. The service interface is built for teams to provision, deploy, and monitor applications. AWS Proton is worth considering if you are using cloud-native services like serverless applications or if you utilize containers in AWS. The benefits continually grow when working with a service-oriented architecture, microservices, or distributed software, as it eases release management, reduces lead time, and creates an environment for teams to operate within a set of rules with little to no additional overhead. AWS Proton is also a good option if you are looking to introduce Infrastructure as Code or CI/CD pipelines to new or even existing software, as AWS Proton supports linking existing resources.

Getting Started with AWS Proton is easy!

Platform Administrators

Since AWS Proton itself is free and you only pay for the underlying resources, you are only a few steps away from giving it a try! First, a member of the platform infrastructure team creates an environment template. An environment defines infrastructure that is foundational to your applications and services, including compute, networking (VPCs), code pipelines, security, and monitoring. Environments are defined via CloudFormation templates and use Jinja for parameters rather than the conventional parameters section in standard CloudFormation templates. You can find template parameter examples in the AWS documentation, and a flavor of the syntax is sketched below. You can create, view, update, and manage your environment templates and their versions in the AWS Console.
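As a rough, abbreviated sketch (the resource name and input name here are hypothetical; see the AWS documentation for the full parameter namespaces), an environment template fragment references its inputs with Jinja syntax:

    Resources:
      Network:
        Type: AWS::EC2::VPC
        Properties:
          # Jinja-style reference to an environment input, in place of a
          # conventional CloudFormation Parameters section
          CidrBlock: "{{ environment.inputs.vpc_cidr }}"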

Once an environment template is created, the platform administrator would create a service template which defines all resources that are logically relative to a service. For example, if we had a container which performs some ETL, this could contain an ECR Repository, ECS Cluster, ECS Service Definition, ECS Task Definition, IAM roles, and the ETL source and target storage.

In another example, we could have an asynchronous lambda which performs some background tasks and its corresponding execution role. You could also consider using schema files for parameter validation! Like environment templates, you can create, view, update, and manage your service templates and their versions in the AWS Console.

Once the templates have been created, the platform administrator can publish the templates and provision the environment. Since services also include CI/CD pipelines, platform administrators should also configure repository connections by creating the GitHub app connector. This is done in the AWS Developer Tools service, or a link can be found on the AWS Proton page in the Console.

Once authorized, the GitHub app is automatically created and integrated with AWS, and CI/CD pipelines will automatically detect available connections during service configuration.


At this time, platform administrators should see a stack which contains the environment’s resources. They can validate each resource, interconnectivity, security, audits, and operational excellence.

Developers

At this point developers can choose which version they will use to deploy their service. Available services can be found in the AWS Console, and developers can review the template and requirements before deployment. Once they have selected the target template, they choose the repository that contains their service code, the GitHub app connection created by the platform administrator, and any parameters required by the service and CodePipeline.

After some time, developers should be able to see their application stack in CloudFormation, their application’s CodePipeline resources, and the resources for their application accordingly!

In Closing

AWS Proton is a new and exciting service for those looking to adopt Infrastructure as Code, enable CI/CD pipelines for their products, and enforce compliance, consistent standards, and best practices across their software and infrastructure. Here we explored a simple use case, but real-world scenarios likely require a more thorough examination and implementation.

AWS Proton may require a transition for teams that already utilize IaC or CI/CD, or that have existing processes for centrally managing their platform infrastructure. 2nd Watch has over 10 years of experience helping companies move to the cloud and implement shared-services platforms that simplify modern cloud operations. Start a conversation with a solution expert from 2nd Watch today, and together we will assess your environment and create a plan built for your goals and targets!

-Isaiah Grant, Cloud Consultant


7 Trends Influencing DevSecOps & DevOps Adoption

Companies worldwide have been increasing DevOps and DevSecOps adoption in their regular workflows at an exponential rate. Whether following Agile methodologies or creating independent workflows stemming from DevOps, companies have been leveraging the faster delivery and superior quality that DevSecOps provides.

However, increasing development in autonomous technologies such as AI and ML points toward a work cycle in which the system operates independently of humans, aiming to deliver faster, more reliable, and better products, shifting from DevOps to NoOps.

A set of practices coupling software development (Dev) and information technology operations (Ops), DevOps is the combination of employees, methods, and products to allow for perpetual, seamless delivery of quality and value. Adding security to a set of DevOps practices, a DevSecOps approach provides multiple layers of security and reliability by integrating highly secure, robust, and dependable processes and tools into the work cycle and the final product.

These desirable outcomes have made DevOps and DevSecOps trendy work cycles in the market. However, with a growing focus on automation and advances in Artificial Intelligence and Machine Learning, we could be heading into a NoOps scenario, where self-learning and self-healing systems govern the work processes.

NoOps is a work cycle wherein the technologies used by a company are so autonomous and intelligent that DevOps and DevSecOps do not need to be exclusively implemented to maintain a continuous outflow of quality and value.

What trends are truly influencing DevOps and DevSecOps adoption in countless tech businesses – small and large – across the globe? Download our 7 Trends Influencing DevOps/DevSecOps Adoption to find out.

-Mir Ali, Field CTO


You’re on AWS. Now What? 5 Strategies to Increase Your Cloud’s Value

Now that you’ve migrated your applications to AWS, how can you take the value of being on the cloud to the next level? To provide guidance on next steps, here are 5 things you should consider to amplify the value of being on AWS.


Ten Years In: Enterprise DevOps Evolves

DevOps has undergone significant changes since the trend began more than a decade ago. No longer limited to a grassroots movement among ‘cowboy’ developers, DevOps has become synonymous with enterprise software releases. In our Voice of the Enterprise: DevOps, Workloads and Key Projects 2020 survey, we found that 90% of companies that had deployed applications to production in the last year had adopted DevOps across some teams (55%) or entirely across the IT organization (40%). Another 9% were in discovery or proof-of-concept phases with their DevOps implementations, leaving only a tiny fraction of respondents reporting no DevOps adoption.

What is DevOps?

DevOps is driven by the need for faster releases, more efficient IT operations, and the flexibility to respond to changes in the market, whether technical, such as the advent of cloud-native technologies, or external, such as the Covid-19 pandemic.

Still, one of the biggest drivers of the trend, and a primary reason DevOps has become part and parcel of enterprise software development and deployment, is top-down adoption. IT management and executive leadership are increasingly interested and involved in DevOps deployments, often because DevOps is a critical part of cloud migration, digital transformation, and other key initiatives.

Most organizations also report that their DevOps implementation is managed or sanctioned by the organization, in line with the departure from the shadow-IT DevOps deployments of 5 or 10 years ago toward approved deployments that meet policy, security, and compliance requirements.

Another significant change in DevOps is the growing role of business objectives and outcomes. Organizations are measuring and proving their DevOps success not only using technical metrics such as quality (47%) and application performance (44%), but also business metrics such as customer satisfaction (also 44%), according to our VotE DevOps study.

We also see line-of-business managers among important stakeholders in DevOps beyond developers and IT operators. The increased focus and priority on business also often translates to a different view on DevOps and IT operations in general. While IT administration has traditionally been a budget spending item with a focus on total cost of ownership (TCO), today’s enterprises are increasingly viewing DevOps and IT ops as a competitive advantage that will bring return on investment (ROI).

DevOps Stakeholder Spread

Another significant aspect of DevOps today is the stakeholder spread. Our surveys have consistently highlighted how security, leadership, traditional IT administrators and business/product managers play an increasingly important role in DevOps, in addition to software developers and IT operations teams. As DevOps spreads to more teams and applications within an organization, it is more likely to pull in these and other key stakeholders, including finance or compliance, among others.

We also see additional people and teams, such as those in sales and marketing or human resources, becoming more integral to enterprise DevOps as the trend continues to evolve.

The prominence of security among primary DevOps stakeholders is indicative of the rapidly evolving DevSecOps trend, whereby security elements are integrated into DevOps workflows.

Our data highlights how a growing number of DevOps releases include security elements, with 64% of companies indicating they include security elements in 2020, compared to 53% in 2019. DevSecOps is being driven mainly by changing attitudes among software developers, who are increasingly less likely to think that security will slow them down and more likely to tie security to quality, which is something they care about.

DevOps Software Security

Software security vendors have also worked to make security tooling such as API firewalls, vulnerability scanning, and software composition analysis (SCA) more integrated and automated so that it doesn’t slow down developers. Finally, the frequency of high-profile security incidents and breaches reminds everyone of the need to reduce risk as much as possible.

Another change in DevOps is an increasing awareness and appreciation of not just technology challenges, but also cultural aspects. Our data indicates top cultural challenges of DevOps include overcoming resistance to change, competing/conflicting priorities and resources, promoting communication and demonstrating equity of benefits/costs.

By aligning objectives, priorities, and desired outcomes, teams can better address these cultural challenges to succeed and spread their DevOps implementations. This is also where we’ve seen that cross-discipline experience – in development, IT operations, security, and so on – can be integral to addressing cultural issues.

If you haven’t yet begun your own DevOps transformation, 2nd Watch takes an interesting approach you can consider. Their DevOps Transformation process begins with a complete assessment that measures your current software development and operational maturity using the CALMS model, then develops a strategy for where and how to apply DevOps approaches.

-Jay Lyman, Senior Research Analyst, Cloud Native and Applied Infrastructure & DevOps at 451 Research, part of S&P Global Market Intelligence