If you moved to the cloud to take advantage of rapid infrastructure deployment and development support, you understand the power of quickly bringing applications to market. Gaining a competitive edge is all about driving customer value fast. Immersing a company in a DevOps transformation is one of the best ways to achieve speed and performance.
In this blog post, we’re building on the insights of Harish Jayakumar, Senior Manager of Application Modernization and Solutions Engineering at Google, and Joey Yore, Manager and Principal Consultant at 2nd Watch. See how the highest performing teams in the DevOps space are achieving strong availability, agility, and profitability with application development according to four key metrics. Understand the challenges, solutions, and potential outcomes before starting your own DevOps approach to accelerating app development.
Beyond the fact that DevOps combines software development (Dev) and IT operations (Ops), DevOps is pretty hard to define. Harish thinks the lack of a clinical, agreed-upon definition is by design. “I think everyone is still learning how to get better at building and operating software.” With that said, he describes his definition of DevOps as, “your software delivery velocity, and the reliability of it. It’s basically a cultural and organizational moment that aims to increase software reliability and velocity.”
The most important thing to remember about a DevOps transformation and the practices and principles that make it possible is culture. At its core, DevOps is a cultural shift. Without embracing, adopting, and fostering a DevOps culture, none of the intended outcomes are possible.
Within DevOps there are five key principles to keep top of mind:
Reduce organizational silos
Accept failure as the norm
Implement gradual changes
Leverage tooling and automation
Measure everything
Measuring DevOps: DORA and CALMS
Google acquired DevOps Research and Assessment (DORA) in 2018 and relies on the methodology developed from DORA’s annual research to measure DevOps performance. “DORA follows a very strong data-driven approach that helps teams leverage their automation process, cultural changes, and everything around it,” explains Harish. Fundamental to DORA are four key metrics that offer a valid and reliable way to measure software delivery performance. These metrics gauge the success of DevOps transformations from ‘low performers’ to ‘elite performers’.
Deployment frequency: How often the organization successfully releases to production
Lead time for changes: The amount of time it takes a commit to get into production
Change failure rate: The percentage of deployments causing a failure in production
Time to restore service: How long it takes to recover from a failure in production
DORA is similar to the CALMS model which addresses the five fundamental elements of DevOps starting with where the enterprise is today and continuing throughout the transformation. CALMS also uses the four key metrics identified by DORA to evaluate DevOps performance and delivery. The acronym stands for:
Culture: Is there a collaborative and customer-centered culture across all functions?
Automation: Is automation being used to remove toil or wasted work?
Lean: Is the team agile and scrappy with a focus on continuous improvement?
Measurement: What, how, and against what benchmarks is data being measured?
Sharing: To what degree are teams teaching, sharing, and contributing to cross-team collaboration?
DevOps Goals: Elite Performance for Meaningful Business Impacts
Based on the metrics above, organizations fall into one of four levels: low, medium, high, or elite performers. The aspiration to achieve elite performance is driven by the significant business impact these teams have on their overall organization. According to Harish, and based on research by the DORA team at Google, “It’s proven that elite performers in the four key metrics are 3.56 times more likely to have a stronger availability practice. There’s a strong correlation between these elite performers and the business impact of the organization that they’re a part of. ”
He goes on to say, “High performers are more agile. We’ve seen 46 times more frequent deployments from them. And they’re more reliable. They are five times more likely to exceed profitability, market share, or productivity goals.” Being able to move quickly enables these organizations to deliver features faster and increase their edge over competitors.
Focusing on the five key principles of DevOps is critical for going from ideation to implementation at a speed that yields results. High and elite performers are particularly agile with their use of technology. When a new technology is available, DevOps teams need to be able to test, apply, and utilize it quickly. With the right tools, teams are alerted immediately to code breaks and where that code resides. Using continuous testing, the team can patch code before it affects other systems. The results are improved code quality and accelerated, efficient recovery. You can see how each pillar of DevOps – from culture and agility to technology and measurement – feeds into the others to deliver high levels of performance, solid availability, and uninterrupted continuity.
Overcoming Common DevOps Challenges
Because culture is so central to a DevOps transformation, most challenges can be solved through cultural interventions. Like any cultural change, there must first be buy-in and adoption from the top down. Leadership plays a huge role in setting the tone for the cultural shift and continuously supporting an environment that embraces and reinforces the culture at every level. Here are some ways to influence an organization’s cultural transformation for DevOps success.
Build lean teams: Small teams are better enabled to deliver the speed, innovation, and agility necessary to achieve across DevOps metrics.
Enable and encourage transparency: Joey says, “Having those big siloed teams, where there’s a database team, the development team, the ops team – it’s really anti-DevOps. What you want to start doing is making cross-functional teams to better aid in knocking down those silos to improve deployment metrics.”
Create continuous feedback loops: Among lean, transparent teams there should be a constant feedback loop of information sharing to influence smarter decision making, decrease redundancy, and build on potential business outcomes.
Reexamine accepted protocols: Always be questioning the organizational and structural processes, procedures, and systems that the organization grows used to. For example, how long does it take to deploy one line of change? Do you do it repeatedly? How long does it take to patch and deploy after discovering a security vulnerability? If it’s five days, why is it five days? How can you shorten that time? What technology, automation, or tooling can increase efficiency?
Measure, measure, measure: Utilize DORA’s research to establish elite performance benchmarks and realistic upward goals. Organizations should continuously identify barriers to achievement and improve against those measurements.
Aim for total performance improvements: Organizations often think they need to choose between performance metrics. For example, in order to influence speed, stability may be negatively affected. Harish says, “Elite performers don’t see trade-offs,” and points to best practices like CI/CD, agile development and testing, built-in automation, standardized platforms and processes, and automated environment provisioning for comprehensive DevOps wins.
Work small: Joey says, “In order to move faster, be more agile, and accelerate deployment, you’re naturally going to be working with smaller pieces with more automated testing. Whenever you’re making changes on these smaller pieces, you’re actually lowering your risk for anyone’s deployment to cause some sort of catastrophic failure. And if there is a failure, it’s easy to recover. Minimizing risk per change is a very important component of DevOps.”
Both Harish and Joey agree that the best approach to starting your own DevOps transformation is one based on DevOps – start small. The first step is to assemble a small team to work on a small project as an experiment. Not only will this help you understand the organization’s current state, but it also minimizes risk to the organization as a whole. Step two is to identify what your organization and your DevOps team are missing. Whether it’s technology and tooling or internal expertise, you need to know what you don’t know to avoid regularly running into the same issues.
Finally, you need to build those missing pieces to set the organization up for success. Utilize training and available technology to fill in the blanks, and partner with a trusted DevOps expert who can guide you toward continuous optimization.
2nd Watch provides Application Modernization and DevOps Services to customize digital transformations. Start with our free online assessment to see how your application modernization maturity compares to other enterprises. Then let 2nd Watch complete a DevOps Transformation Assessment to help develop a strategy for the application and implementation of DevOps practices. The assessment includes analysis using the CALMS model, identification of software development and level of DevOps maturity, and delivering tools and processes for developing and embracing DevOps strategies.
DevOps is defined as a set of practices that combines software development (Dev) and information-technology operations (Ops) to shorten the systems development life cycle and provide continuous delivery with high software quality. (DevOps, n.d.) Notably, tools and tooling go unmentioned in that definition, likely because of the misconceptions surrounding them or their sheer number.
Indeed, there are a lot of DevOps tools that span across a multitude of categories. From application performance monitoring tools to configuration management tools to deployment tools to continuous integration and continuous delivery tools and more, it’s no wonder IT organizations are hesitant to learn new ones.
Misconceptions about DevOps tooling are a common problem in software development and are essential to highlight. When 2nd Watch guides organizations along their DevOps transformation journey, common misconceptions encountered include:
“We already have a tool that does that.”
“We have too many tools already.”
“We need to standardize.”
“Our teams aren’t ready for that.”
However, these statements contradict what DevOps represents. Although DevOps practices are not about running a particular set of tools, tooling is undoubtedly an essential component that helps DevOps teams operate and evolve applications quickly and reliably. Tools also help engineers independently accomplish tasks that generally require help from other groups, further increasing a team’s velocity.
Crucial DevOps Practices
Several practices are crucial to a successful DevOps transformation and counter the objections mentioned above.
Humans learn by making mistakes and adjusting their behavior accordingly to improve continuously. IT practitioners do that day-in and day-out when they write code, change a configuration setting, or look at a new dashboard metric. All the above statements work against this idea, hindering experimentation. A new tool may open new avenues of test automation or provide better tracking mechanisms. An experimentation mindset is crucial to the development process for developers to get better at it and improve their understanding of when, where, and how something will or will not fit into their workflow.
“Shifting left” is a common DevOps practice and is important to understand. Many of the above statements stifle this idea due to siloed, gated workflows. For example, a product’s development workflow starts on the left, then flows along different stages and gates toward the right, eventually making it into production. By moving feedback as far left as possible, implementers are alerted to problems faster and quickly remedy the issues.
Servant leadership is critical in handling a changing culture. It is another aspect of shifting left where, rather than “managing” specifics and giving orders, servant leaders encourage and guide decisions toward a common goal. Ultimately, the decision-making process is moving to the left.
One tool to rule them all and, with automation, bind them!
Described here is a specific tool that helps developers be more efficient in their software delivery which, in turn, drives organizations to embrace the DevOps culture. It is a unified tool that provides a single interface that everyone can understand and to which they can contribute.
“GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program’s source files.” – gnu.org
GNU Make generates artifacts from source files by running local commands. It’s prevalent in the UNIX-like (*nix) world, and for a good reason. In the past, developers primarily used GNU Make to build Linux source files and compile C and C++. However, its absolute dominance in that realm made it readily available in nearly every *nix flavor and with minimal variance. It’s a tried-and-true tool that’s relatively simple to use and available everywhere.
There are two things needed to use GNU Make:
the GNU Make tool
a Makefile
Because installation of the tool is outside of the scope of this article, that step will be skipped. However, if a *nix system is used, Make is likely already installed.
Rather than using GNU Make to create an executable, make is used here strictly to set up automation, significantly simplifying the code. Terraform is the tool used to demonstrate how GNU Make works.
The Makefile goes directly into the root of the project…
…and looks like this:
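A minimal sketch of that Makefile, assuming only the single init rule discussed in this section:

```makefile
# init is a phony target: it produces no file, it only runs a command.
.PHONY: init

init:
	terraform init
```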
The GNU Make tool will look in the current working directory for a file named “Makefile” and use that as its set of instructions. Although the tool will accept either Makefile or makefile, it is a firmly accepted standard practice to use the capitalized form.
The Makefile defines a set of rules, targets, and dependencies. Since this Makefile is a wrapper, the targets do not correspond to real files, so they are listed as .PHONY, telling make that these rules will not output a file.
In this example, make is being told that it is not building anything with the rule (.PHONY: init), is defining the rule (init:), and which commands make is to execute when the init rule is executed (terraform init).
The following command is run in the root of the project to execute the rule:
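In the project this is simply make init. As a self-contained sketch that runs anywhere, the same shape of Makefile can be written to a temporary directory first, with terraform init swapped for an echo (an assumption purely for illustration):

```shell
set -e
# Write a stand-in Makefile (the real one runs `terraform init` instead).
dir=$(mktemp -d)
printf '.PHONY: init\ninit:\n\t@echo "initializing project"\n' > "$dir/Makefile"
cd "$dir"

# Execute the init rule from the root of the project:
make init
```

make finds the Makefile in the current directory automatically, so no arguments beyond the rule name are needed.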
The plan rule
Once the project is initialized, Terraform displays the changes it wants to make to the infrastructure before implementation to verify expectations. (NOTE: terraform plan prints out a description of what Terraform is going to do, but does not make changes)
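Extending the Makefile, a sketch of the plan rule with init declared as its dependency:

```makefile
.PHONY: init plan

init:
	terraform init

# plan depends on init, so the init rule runs first.
plan: init
	terraform plan
```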
.PHONY is used again because, like before, nothing is being created. Here, the init rule is set up as a dependency for the plan rule, so when make plan is executed, the init rule will run first. Any number of these rules can be strung together, helping to keep them atomic. (NOTE: Terraform itself does not require running init before every plan; the dependency simply codifies the workflow.)
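A sketch of that more complex plan rule; the workspace names and the variable-file naming convention are assumptions for illustration:

```makefile
.PHONY: plan

# Requires an environment, e.g. `make plan ENV=staging`.
plan: init
	@test -n "$(ENV)" || { echo "ENV is required, e.g. make plan ENV=staging"; exit 1; }
	terraform workspace select $(ENV)
	terraform plan -var-file=$(ENV).tfvars
```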
With a little more complexity, make can codify important steps in an operation. In the above, an environment variable is required (ENV), then the appropriate workspace is selected before running the plan rule with the appropriate variables set for that environment. (-var-file).
The test rule
Create a simple test rule for the Terraform project:
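Per the description that follows, the rule can be as small as:

```makefile
.PHONY: test

# Canonically format the templates, then check that they are valid.
test:
	terraform fmt
	terraform validate
```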
Here, the code is formatted to canonicalize it, and then the validate Terraform command is run to ensure the templates are valid.
The idea behind self-documenting code is to have code written in a way that’s explanatory to its purpose. Variable names, function names, classes, objects, and methods should all be named appropriately. Volumes of documentation are not needed to describe what a piece of code is doing. As it’s code, the underlying understanding is that it’s executable, so what is read in the code is what will happen when it is executed.
In the above examples, executable documentation for the project was also created by using make as the central “documentation” point. Even when things got more complex with the second plan example, any editor can be used to see the definition for the plan rule.
In cases where Terraform isn’t used often, it isn’t necessary to remember every command, variable, or command-line argument needed to be productive. Which commands are run, and where, can still be seen. And, if more details are required, execute the command in a terminal and read its help information.
The Adapter Design Pattern
In software engineering, the adapter design pattern is a well-known pattern that converts one interface into another interface clients expect, allowing different inputs into a standard interface. The concept may be better understood through the following analogy.
There are many different types of outlets used throughout the world. Europe uses a round, 2-pronged plug. The United States (US) uses a 2-pronged flat or 3-pronged (2 flat, 1 round) plug. One solution for US plugs to work in European outlets is to rewire the outlet itself to accept both US and European plugs. However, this “solution” is costly. Instead, an easier and less expensive solution is an adapter that accepts the prongs of the US plug and converts them to the European prongs that plug into the wall.
This is seen when documentation for command-line tools is accessed using the help argument. Nearly every command-line tool understands the --help switch or help command. It’s ubiquitous.
The earlier examples (init, plan) are more specific to Terraform, but the test rule is not. The test rule is an excellent example of being able to use make as an adapter!
Development testing is a critical aspect of all engineering, including software delivery. Writing tests ensures that any changes introduced into a project don’t cause any unintended problems or bugs.
Every language and many tools have individual testing frameworks—multiple frameworks per language, many times with different commands for each. By using make as the user interface to the project, workflows can stay the same—or very similar—across multiple projects utilizing multiple tools. For example, make test can be run in any project, and the tests for that project will execute, regardless of the language. Python? Golang? .NET? Ruby? make test can be used for each one in every project.
Another example is the init rule in the above Makefile. There’s typically some setup or configuration for every project that needs to happen to get an environment ready for use. The init rule can be used to set up any project, whether for running pip for python or npm for Node.
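As an illustration of make as that adapter, here is a hypothetical Python project’s Makefile (file names and commands are typical choices, not prescribed):

```makefile
.PHONY: init test

# Set up the environment for this (Python) project.
init:
	pip install -r requirements.txt

# Run this project's test suite.
test:
	pytest

# A Node project would expose the identical interface with different tooling:
#   init: npm install
#   test: npm test
```

From the user’s perspective, make init and make test behave the same in both projects; only the tooling behind the interface changes.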
The fear of tool sprawl need not be paralyzing. By utilizing a wrapper like GNU Make to provide a standard interface, many of the problems encountered when implementing a breadth of tools can be mitigated. Cultural changes remain essential, and tools alone cannot solve them. But a piece of tech like GNU Make can ease the perception of “too many tools.”
DevOps is a set of practices that improve the efficiency and effectiveness of IT operations. Utilizing many aspects of agile methodology, DevOps aims to shorten the systems development life cycle and provide continuous improvement. As you consider incorporating DevOps into your operations, understand the effect DevOps has on processes and culture. Successful implementation is about finding the right balance of attention on people, processes, and technology to achieve improvement.
The ultimate goal is continuous improvement through processes and tools. No amount of tooling, automation, or fancy buzzwords can have a greater effect on an organization than transforming its culture, and there’s no other way to do that than to focus on the change.
Understanding What You Are Trying to Accomplish with DevOps
Ask yourself what you are trying to achieve. It may seem obvious, but you may get on the wrong track without thinking about what you want your development and operations teams to achieve.
Often, when clients approach 2nd Watch wanting to incorporate DevOps, they are really asking for automation tools and nothing else. While automation has certain benefits, DevOps goes beyond the benefits of technology to improve processes, help manage change more effectively, and improve organizational culture. Change is difficult. However, implementing a cultural shift is particularly challenging. Often overlooked, cultural change is the greatest pain 2nd Watch consultants encounter when working with companies trying to make substantial organizational changes. Even implementing things as simple as sharing responsibility, configuration management, or version control can cause turmoil!
From IT Management to Leadership
There is a distinction between what it means to be a manager versus being a leader. And, in all industries, being a manager does not necessitate being a good leader.
It’s helpful to consider the progression of those in technical roles to management. Developers and operations personnel are typically promoted to managers because they are competent in their technical position—they excel at their current software development process, configuring a host or operating a Kubernetes cluster. However, as a manager, they’re also tasked with directing staff, which may put them outside of their comfort zone. They are also responsible for pay, time and attendance, morale, and hiring and firing. They likely were not promoted for their people skills but their technical competencies.
Many enterprise organizations make the mistake of believing employees who have outstanding technical skills will naturally excel at people management once they get that promotion. Unfortunately, this mistake breeds many managers who fall short of potential, often negatively affecting corporate culture.
Leading the Change
It’s imperative to understand the critical role leadership plays in navigating the amount of change that will likely occur and in changing the organization’s culture.
Whether you’re a manager or leader matters a lot when you answer the question, “What do I really want out of DevOps?” with, “I want to be able to handle change. Lots and lots of change.”
Better responses would include:
“I want our organization to be more agile.”
“I want to be able to react faster to the changing market.”
“I want to become a learning organization.”
“I want to embrace a DevOps culture for continuous improvement.”
The underlying current of these answers is change.
Unfortunately, when management bungles a transformation, it’s the people below who pay the price. Those implementing the changes bear the brunt of the change pain. Not only does this lower morale, but it can cause a mutiny of sorts: apathy erodes quality and causes outages, the best employees jump ship for greener pastures, and managers may give up on culture change entirely and revert to the old ways.
However, there is light at the end of the tunnel. With a bit of effort and determination, you can learn to lead change just as you learned technical skills.
Go to well-known sources on management improvement and change management. Leading Change by John P. Kotter details the successful implementation of change into an organization. Kotter discusses eight steps necessary to help improve your chances of being successful in changing an organization’s culture:
Establishing a sense of urgency
Creating the guiding coalition
Developing a vision and strategy
Communicating the change vision
Empowering broad-based action
Generating short-term wins
Consolidating gains and producing more change
Anchoring new approaches in the culture
It’s all about people. Leaders want to empower their teams to make intelligent, well-informed decisions that align with their organization’s goals. Fear of making mistakes should not impede change.
Mistakes happen. Instead of managers locking their teams down and passing workflows through change boards, leaders can embrace the DevOps movement and foster a culture where their high-performing DevOps team can make mistakes and quickly remedy and learn from them.
Each step codifies what most organizations are missing when they start a transformation: focusing on the change and moving from a manager to a leader.
The 5 Levels of Leadership
Learning the skills necessary to become a great leader is not often discussed when talking about leadership or management positions. We are accustomed to many layers of management and managers sticking to the status quo in the IT industry. But change is necessary, and the best place to start is with ourselves. John C. Maxwell’s 5 Levels of Leadership offer a useful ladder for that self-assessment:
Level 1 – Position: People follow you only because they believe they have to.
Level 2 – Permission: People follow you because they want to.
Level 3 – Production: People follow you because of what you have done for the organization.
Level 4 – People Development: People follow you because of what you have done for them.
Level 5 – Pinnacle: People follow you because of who you are and what you represent.
Determining your position on this ladder can help. These levels apply to individuals, but because an organization’s culture often reflects the quality of its leadership, they also mirror the problems the organization faces as a whole.
When transforming to a DevOps culture, it’s essential to understand ways to become a better leader. In turn, making improvements as a leader will help foster a healthy environment in which change can occur. And there’s no better catalyst to becoming a great leader than being able to focus on the change.
2nd Watch collaborates with many different companies just beginning their workflow modernization journey. Contact us to discuss how we can help your organization further adopt a DevOps culture.
As a result of the increase in cloud adoption across all industries, understanding practices and tools that help organizations’ software run efficiently is essential to how their cloud environment and organization operate. However, many companies do not have the knowledge or expertise needed for success. In fact, Puppet’s 2021 State of DevOps Report found that while 2 in 3 respondents report using the public cloud, only 1 in 4 use the cloud to its full potential.
Enter the DevOps movement
The concept of DevOps combines development and operations to encourage collaboration, embrace automation, and speed up the deployment process. Historically, development and operations teams worked independently, leading to inefficiencies and inconsistencies in objectives and department leadership. DevOps is the movement to eliminate these roadblocks and bring the two communities together to transform how their software operates.
According to a 2020 Atlassian survey, 99% of developers and IT decision-makers say DevOps has positively impacted their organization, with benefits including career advancement and better, faster deliverables. Given the favorable outcome for these developers and IT decision-makers, adopting DevOps tools and practices is a no-brainer. But here are three more advantages to embracing the DevOps movement:
1. Increased Speed and Agility
Practices like microservices and continuous delivery allow your business operations to move faster, as your operations and development teams can innovate for customers more quickly, adapt to changing markets, and efficiently drive business results. Additionally, continuous integration and continuous delivery (CI/CD) automate the software release process for fast and continuous software delivery. A quick release process will allow you to release new features, fix bugs, respond to your customers’ needs, and ultimately, provide your organization with a competitive advantage.
2. Maintained Security
While DevOps focuses on speed and agile software development, security is still of high priority in a DevOps environment. Tools such as automated compliance policies, fine-grained controls, and configuration management techniques will help you reap the speed and efficiencies provided by DevOps while maintaining control and compliance of your environment.
3. Improved Collaboration
DevOps is more than just technical practices and tools. A complete DevOps transformation involves adopting cultural values and organizational practices that increase collaboration and improve company culture. The DevOps cultural model emphasizes values like ownership and accountability, which work together to improve company culture. As development and operations teams work closely together, their collaboration reduces inefficiencies in their workflows. Additionally, collaboration entails succinctly communicating roles, plans, and goals. The State of DevOps Report also found that clarity of purpose, mission and operating context seem to be strongly associated with highly evolved organizations.
In short, teams who adopt DevOps practices can improve and streamline their deployment pipeline.
What is a DevOps Pipeline?
The term “DevOps pipeline” describes the set of automated processes and tools that allow development and operations teams to implement, test, and deploy code to a production environment in a structured and organized manner.
A DevOps pipeline may look different or vary from company to company, but there are typically eight phases: plan, code, build, test, release, deploy, operate, and monitor. When developing a new application, a DevOps pipeline ensures that the code runs smoothly. Once written, various tests are run on the code to flush out potential bugs, mistakes, or any other possible errors. After building the code and running the tests for proper performance, the code is ready for deployment to external users.
A significant characteristic of a DevOps pipeline is it is continuous, meaning each function occurs on an ongoing basis. The most vital one, which was mentioned earlier, is CI/CD. CI, or continuous integration, is the practice of automatically and continuously building and testing any changes submitted to an application. CD, or continuous delivery, extends CI by using automation to release software frequently and predictably with the click of a button. CD allows developers to perform a more comprehensive assessment of updates to confirm there are no issues.
Other “continuous” DevOps practices include:
Continuous deployment: This practice goes beyond continuous delivery (CD). It is an entirely automated process that requires no human intervention, eliminating the need for a “release day.”
Continuous feedback: Applying input from customers and stakeholders, along with systematic testing and monitoring of code in the pipeline, allows developers to implement changes faster, leading to greater customer satisfaction.
Continuous testing: A fundamental enabler of continuous feedback. Performing automated tests on the code throughout the pipeline leads to faster releases and a higher quality product.
Continuous monitoring: Another component of continuous feedback. Use this practice to continuously assess the health and performance of your applications and identify any issues.
Continuous operations: Use this practice to minimize or eliminate downtime for your end users through efficiently managing hardware and software changes.
From a comprehensive assessment that measures your current software development and operational maturity to developing a strategy for where and how to apply different DevOps approaches to ongoing management and support, we will be with you every step of the way. Following is what a typical DevOps transformation engagement with us looks like:
Phase 0: Basic DevOps Review
DevOps and assessment overview delivered by our Solutions Architects
Phase 1: Assessment & Strategy
Initial 2-4 week engagement to measure your current software development and operational maturity
Develop a strategy for where and how to apply DevOps approaches
Phase 3: Onboarding
1-2 week onboarding to 2nd Watch Managed DevOps service and integration of your operations team and tools with ours
Phase 4: Managed DevOps
Ongoing managed service, including monitoring, security, backups, and patching
Ongoing guidance and coaching to help you continuously improve and increase the use of tooling within your DevOps teams
Getting Started with DevOps
Companies may understand the business benefits derived from DevOps, but realizing them takes expertise. 2nd Watch has the knowledge and experience to help accelerate your digital transformation journey. 2nd Watch is a Docker Authorized Consulting Partner and has earned the AWS DevOps Competency for technical proficiency, leadership, and proven success in helping customers adopt the latest DevOps principles and technologies. Contact us today to get started.