DevOps is defined as a set of practices that combines software development (Dev) and information-technology operations (Ops) to shorten the systems development life cycle and provide continuous delivery with high software quality (DevOps, n.d.). Notably, this definition makes no mention of tools, or tooling, likely because of misconceptions about them or simply their sheer number.
Indeed, there are a lot of DevOps tools, and they span a multitude of categories. From application performance monitoring tools to configuration management tools to deployment tools to continuous integration and continuous delivery tools and more, it’s no wonder IT organizations are hesitant to learn new ones.
Misconceptions about DevOps tooling are a common problem in software development and are essential to highlight. When 2nd Watch guides organizations along their DevOps transformation journey, common misconceptions encountered include:
- “We already have a tool that does that.”
- “We have too many tools already.”
- “We need to standardize.”
- “Our teams aren’t ready for that.”
However, these statements contradict what DevOps represents. Although DevOps practices are not about running a particular set of tools, tooling is undoubtedly an essential component that helps DevOps teams operate and evolve applications quickly and reliably. Tools also help engineers independently accomplish tasks that generally require help from other groups, further increasing a team’s velocity.
Crucial DevOps Practices
Several practices are crucial to a successful DevOps transformation and counter the objections mentioned above.
Humans learn by making mistakes and adjusting their behavior accordingly, improving continuously. IT practitioners do that day in and day out when they write code, change a configuration setting, or look at a new dashboard metric. All of the above statements work against this idea, hindering experimentation. A new tool may open new avenues of test automation or provide better tracking mechanisms. An experimentation mindset is crucial for developers to get better at the development process and to understand when, where, and how something will or will not fit into their workflow.
“Shifting left” is a common DevOps practice and is important to understand. Many of the above statements stifle this idea through siloed, gated workflows. For example, a product’s development workflow starts on the left, then flows through different stages and gates toward the right, eventually making it into production. By moving feedback as far left as possible, implementers are alerted to problems sooner and can remedy them more quickly.
Servant leadership is critical in handling a changing culture. It is another aspect of shifting left where, rather than “managing” specifics and giving orders, servant leaders encourage and guide decisions toward a common goal. Ultimately, the decision-making process is moving to the left.
One tool to rule them all and, with automation, bind them!
Described here is a specific tool that helps developers be more efficient in their software delivery which, in turn, drives organizations to embrace the DevOps culture. It is a unified tool that provides a single interface that everyone can understand and to which they can contribute.
GNU Make generates artifacts from source files by running local commands. It’s prevalent in the UNIX-like (*nix) world, and for a good reason. In the past, developers primarily used GNU Make to build Linux source files and compile C and C++. However, its absolute dominance in that realm made it readily available in nearly every *nix flavor and with minimal variance. It’s a tried-and-true tool that’s relatively simple to use and available everywhere.
There are two things needed to use GNU Make:
- the GNU Make tool
- a Makefile
Because installing the tool is outside the scope of this article, that step will be skipped. However, on a *nix system, make is likely already installed.
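On most systems, availability can be confirmed with a quick version check:

```shell
# Print the installed make version; GNU Make identifies itself on the first line.
make --version
```

If the command is not found, GNU Make is available through every major package manager (apt, yum, brew, and so on).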
Rather than using the GNU Make tool to create an executable, make is strictly used here to set up automation, significantly simplifying the code. Terraform is the tool used to demonstrate how GNU Make works.
The Makefile goes directly into the root of the project and looks like this:
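A minimal version, wrapping the three Terraform commands covered below, might look like the following sketch (the exact file will vary by project):

```make
# Wrapper rules for everyday Terraform commands.
# None of these targets produce a file, so all are marked .PHONY.
.PHONY: init plan test

init:
	terraform init

plan: init
	terraform plan

test:
	terraform fmt
	terraform validate
```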
The GNU Make tool looks in the current working directory for the “Makefile” and uses it as its set of instructions. Although the tool will accept either Makefile or makefile, using the capitalized form is firmly accepted standard practice.
The Makefile defines a set of rules, targets, and dependencies. Since this Makefile is a wrapper, its targets are listed as .PHONY, telling make that the rules will not output a file.
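As a self-contained illustration of targets, dependencies, and .PHONY, here is a toy Makefile whose recipes only echo (hypothetical rule names, nothing Terraform-specific):

```shell
# Generate a toy Makefile; printf keeps the tab characters that recipes require.
printf '.PHONY: hello world\nhello:\n\t@echo hello\nworld: hello\n\t@echo world\n' > Makefile

# Because world depends on hello, make runs hello's recipe first.
make world    # prints "hello" then "world"
```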
1. The init rule:
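The rule consists of three lines:

```make
.PHONY: init
init:
	terraform init
```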
In this example, make is being told that it is not building anything with the rule (.PHONY: init), is defining the rule (init:), and which commands make is to execute when the init rule is executed (terraform init).
The rule is executed by running % make init in the root of the project.
2. The plan rule
Once the project is initialized, Terraform can display the changes it wants to make to the infrastructure so they can be verified against expectations before implementation. (NOTE: terraform plan prints out a description of what Terraform is going to do but does not make changes.)
.PHONY is used again because, like before, nothing is being created. Here, the init rule is set up as a dependency of the plan rule, so when % make plan is executed, the init rule runs first. Any number of rules can be strung together this way, helping to keep them atomic. (NOTE: it is not necessary to run init before plan with Terraform.)
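For example, a more elaborate plan rule might look like the following sketch (the ENV variable name and the env/ tfvars path are assumptions for illustration):

```make
.PHONY: plan
plan: init
	@test -n "$(ENV)" || { echo "ENV is required, e.g. make plan ENV=dev"; exit 1; }
	terraform workspace select $(ENV)
	terraform plan -var-file=env/$(ENV).tfvars
```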
With a little more complexity, make can codify important steps in an operation: requiring an environment variable (ENV), selecting the appropriate workspace, and then running the plan with the appropriate variables set for that environment (-var-file).
3. The test rule
Create a simple test rule for the Terraform project:
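A sketch of the rule, using Terraform’s built-in fmt and validate commands:

```make
.PHONY: test
test:
	terraform fmt
	terraform validate
```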
Here, terraform fmt canonicalizes the code’s formatting, and then terraform validate ensures the templates are valid.
Concepts of Unifying Tools
The idea behind self-documenting code is to write code in a way that explains its own purpose. Variable names, function names, classes, objects, and methods should all be named appropriately, so volumes of documentation are not needed to describe what a piece of code is doing. Because it is code, the underlying understanding is that it is executable: what is read in the code is what will happen when it is executed.
In the above examples, executable documentation for the project was also created by using make as the central “documentation” point. Even when things got more complex with the second plan example, any editor can be used to see the definition for the plan rule.
In cases where Terraform isn’t used often, it isn’t necessary to remember every command, variable, or command-line argument needed to be productive. Which commands are run, and where, can still be seen. And, if more details are required, execute the command in a terminal and read its help information.
The Adapter Design Pattern
In software engineering, the adapter design pattern is a well-known pattern that allows different inputs to work with a standard interface. The concept may be better understood through the following analogy.
There are many different types of outlets used throughout the world. Europe uses a round, two-pronged plug. The United States (US) uses a two-pronged flat or three-pronged (two flat, one round) plug. One way for US plugs to work in European outlets would be to rework the outlet itself to accept both US and European plugs; however, that “solution” is costly. An easier and less expensive solution is an adapter that accepts the US plug’s prongs and presents the European prongs that plug into the wall.
This is seen when documentation for command-line tools is accessed using the help argument. Nearly every command-line tool understands the --help switch or help command. It’s ubiquitous.
The earlier examples (init, plan) are more specific to Terraform, but the test rule is not. The test rule is an excellent example of being able to use make as an adapter!
Development testing is a critical aspect of all engineering, including software delivery. Writing tests ensures that any changes introduced into a project don’t cause unintended problems or bugs.
Every language, and many tools, has individual testing frameworks, often multiple frameworks per language, each with different commands. By using make as the user interface to the project, workflows can stay the same, or very similar, across multiple projects utilizing multiple tools. For example, make test can be run in any project, and the tests for that project will execute, regardless of the language. Python? Golang? .NET? Ruby? make test can be used for each one in every project.
Another example is the init rule in the above Makefile. There’s typically some setup or configuration for every project that needs to happen to get an environment ready for use. The init rule can be used to set up any project, whether for running pip for python or npm for Node.
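For example, a Python project’s Makefile could wrap its own tooling behind the same rule names (assuming pytest and a requirements.txt, which are common but not universal):

```make
.PHONY: init test

init:
	pip install -r requirements.txt

test:
	pytest
```

Regardless of the stack behind them, % make init and % make test remain the same from the user’s point of view.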
The fear of tooling burnout need not become reality. By utilizing a wrapper like GNU Make to provide a standard interface, many of the problems that might be encountered when implementing a breadth of tools can be mitigated. Cultural change is still essential, and tools alone will not achieve it, but a piece of tech like GNU Make can help dispel the perception of “too many tools.”
Contact 2nd Watch to discuss how we can help further your DevOps transformation.
-by Craig Monson, Principal Cloud Consultant, 2nd Watch