3 Advantages to Embracing the DevOps Movement (Plus Bonus Pipeline Info!)

What is DevOps?

As cloud adoption increases across all industries, understanding the practices and tools that help software run efficiently is essential to how an organization and its cloud environment operate. However, many companies lack the knowledge or expertise needed for success. In fact, Puppet’s 2021 State of DevOps Report found that while 2 in 3 respondents report using the public cloud, only 1 in 4 use the cloud to its full potential.

Enter the DevOps movement


The concept of DevOps combines development and operations to encourage collaboration, embrace automation, and speed up the deployment process. Historically, development and operations teams worked independently, leading to inefficiencies and inconsistencies in objectives and department leadership. DevOps is the movement to eliminate these roadblocks and bring the two communities together to transform how their software operates.

According to a 2020 Atlassian survey, 99% of developers and IT decision-makers say DevOps has positively impacted their organization, with reported benefits including career advancement and better, faster deliverables. Given those outcomes, adopting DevOps tools and practices is a no-brainer. But here are three more advantages to embracing the DevOps movement:

1. Speed

Practices like microservices and continuous delivery allow your business operations to move faster, as your operations and development teams can innovate for customers more quickly, adapt to changing markets, and efficiently drive business results. Additionally, continuous integration and continuous delivery (CI/CD) automate the software release process for fast and continuous software delivery. A quick release process will allow you to release new features, fix bugs, respond to your customers’ needs, and ultimately, provide your organization with a competitive advantage.

2. Security

While DevOps focuses on speed and agile software development, security remains a high priority in a DevOps environment. Tools such as automated compliance policies, fine-grained controls, and configuration management techniques will help you reap the speed and efficiency provided by DevOps while maintaining control of, and compliance in, your environment.

3. Improved Collaboration

DevOps is more than just technical practices and tools. A complete DevOps transformation involves adopting cultural values and organizational practices, such as ownership and accountability, that increase collaboration and improve company culture. As development and operations teams work closely together, their collaboration reduces inefficiencies in their workflows. Collaboration also means clearly communicating roles, plans, and goals. The State of DevOps Report likewise found that clarity of purpose, mission, and operating context is strongly associated with highly evolved organizations.

In short, teams who adopt DevOps practices can improve and streamline their deployment pipeline.

What is a DevOps Pipeline?


The term “DevOps pipeline” describes the set of automated processes and tools that allow development and operations teams to implement, test, and deploy code to a production environment in a structured, organized manner.

A DevOps pipeline may vary from company to company, but there are typically eight phases: plan, code, build, test, release, deploy, operate, and monitor. When developing a new application, a DevOps pipeline ensures that the code runs smoothly. Once the code is written, various tests are run against it to flush out potential bugs, mistakes, and other errors. After the code is built and the tests pass, it is ready for deployment to external users.

A significant characteristic of a DevOps pipeline is that it is continuous, meaning each function occurs on an ongoing basis. The most vital of these practices, mentioned earlier, is CI/CD. CI, or continuous integration, is the practice of automatically and continuously building and testing every change submitted to an application. CD, or continuous delivery, extends CI by using automation to release software frequently and predictably at the click of a button. CD also gives developers room to perform a more comprehensive assessment of each update and confirm there are no issues.

Other “continuous” DevOps practices include:

  • Continuous deployment: This practice goes beyond continuous delivery (CD). It is an entirely automated process that requires no human intervention, eliminating the need for a “release day.”
  • Continuous feedback: Applying input from customers and stakeholders, and systematic testing and monitoring code in the pipeline, allows developers to implement changes faster, leading to greater customer satisfaction.
  • Continuous testing: A fundamental enabler of continuous feedback. Performing automated tests on the code throughout the pipeline leads to faster releases and a higher quality product.
  • Continuous monitoring: Another component of continuous feedback. Use this practice to continuously assess the health and performance of your applications and identify any issues.
  • Continuous operations: Use this practice to minimize or eliminate downtime for your end users through efficiently managing hardware and software changes.
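To make these “continuous” practices concrete, here is a minimal, hypothetical sketch of a pipeline definition. It assumes a GitHub Actions-style workflow and illustrative make targets (your CI server, branch names, and commands will differ), and it wires integration, testing, and delivery to every push:

# Hypothetical CI/CD workflow; job names, branches, and make targets are illustrative.
name: ci-cd
on:
  push:
    branches: [main]              # every merge to main triggers the pipeline

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # continuous integration: pull the latest changes
      - run: make test            # continuous testing: automated unit tests on each push
      - run: make build           # build a deployable artifact

  deliver:
    needs: build-and-test         # only runs if the build and tests succeed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make deploy_staging  # continuous delivery: automated release to a non-production environment

From there, promotion to production can sit behind a manual approval gate (continuous delivery) or be fully automated (continuous deployment).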

 Embrace the DevOps Culture

We understand that change is not always easy. However, through our Application Modernization & DevOps Transformation process, 2nd Watch can help you embrace and achieve a DevOps culture.

From a comprehensive assessment that measures your current software development and operational maturity to developing a strategy for where and how to apply different DevOps approaches to ongoing management and support, we will be with you every step of the way. Following is what a typical DevOps transformation engagement with us looks like:

Phase 0: Basic DevOps Review

  • DevOps and assessment overview delivered by our Solutions Architects

Phase 1: Assessment & Strategy

  • Initial 2-4 week engagement to measure your current software development and operational maturity
  • Develop a strategy for where and how to apply DevOps approaches

Phase 2: Implementation

Phase 3: Onboarding to Managed Services

  • 1-2 week onboarding to 2nd Watch Managed DevOps service and integration of your operations team and tools with ours

Phase 4: Managed DevOps

  • Ongoing managed service, including monitoring, security, backups, and patching
  • Ongoing guidance and coaching to help you continuously improve and increase the use of tooling within your DevOps teams

Getting Started with DevOps

While companies may understand the business benefits of DevOps, many lack the expertise to realize them. 2nd Watch has the knowledge and experience to help accelerate your digital transformation journey. 2nd Watch is a Docker Authorized Consulting Partner and has earned the AWS DevOps Competency for technical proficiency, leadership, and proven success in helping customers adopt the latest DevOps principles and technologies. Contact us today to get started.

-Tessa Foley, Marketing

 


Cloud Automation for I.T. Governance, Risk, and Compliance (GRC) in Healthcare

It has been said that “the hero of a successful digital transformation is GRC.” The ISACA website states, “to successfully manage the risk in digital transformation you need a modern approach to governance, risk and regulatory compliance.” For GRC program development, it is important to understand the health information technology resources and tools available to enable long-term success.


What is GRC and why is it important?

According to the HIPAA Journal, the average cost of a healthcare data breach is now $9.42 million. In the first half of 2021, 351 significant data breaches were reported, affecting nearly 28 million individuals. The need for effective information security and controls has never been more acute among healthcare providers, insurers, and biotechnology and health research companies. Protecting sensitive data and establishing a firm security posture is essential. Improving healthcare and reducing costs rely on structured approaches and thoughtful implementation of available technologies to help govern data and mitigate risk across the enterprise.

Effective and efficient management of governance, risk, and compliance, or GRC, is fast becoming a business priority across industries. Leaders at hospitals and health systems of all sizes are looking for ways to build operating strategies that harmonize and enhance efforts for GRC. Essential to that mission are effective data governance, risk management, regulatory compliance, business continuity management, project governance, and security. But rather than stand-alone or siloed security or compliance efforts, a cohesive program coupled with GRC solutions allows organizational leaders to address the multitude of challenges more effectively and efficiently.

What are the goals for I.T. GRC?

For GRC efforts, leaders are looking to:

  • Safeguard protected healthcare data
  • Meet and maintain compliance with evolving regulatory mandates and standards
  • Identify, mitigate, and prevent risk
  • Reduce operational friction
  • Build in and utilize best practices

Managing governance, risk, and compliance in healthcare enterprises is a daunting task. GRC implementation for healthcare risk managers can be difficult, especially during this time of rapid digital and cloud transformation. But relying on internal legacy methods and tools leads to the same issues that have been seen on-premises, stifling innovation and improvement. As organizations adapt to cloud environments as a key element of digital transformation and integrated health care, leaders are realizing that now is the time to leverage the technology to implement GRC frameworks that accelerate their progress toward positive outcomes. What’s needed is expertise and a clear roadmap to success.

Cloud Automation of GRC

The road to success starts with a framework, aligned to business objectives, that provides cloud automation of governance, risk, and compliance. Broken into three distinct phases, this would ideally involve:

  1. Building a Solid Foundation – ensuring, within the cloud environment, that infrastructure and applications are secured before they are deployed.
  • Image/Operating System hardening automation pipelines.
  • Infrastructure Deployment Automation Pipelines, including Policy as Code to meet governance requirements (a compliance-rule sketch follows this list).
  • CI/CD Pipelines including Code Quality and Code Security.
  • Disaster Recovery as a Service (DRaaS) meeting the organization’s Business Continuity Planning requirements.
  • Configuration Management to allow automatic remediation of your applications and operating systems.
  • Cost Management strategies with showback and chargeback implementation.
  • Automatic deployment and enforcement of standard security tools, including FIM, IDS/IPS, AV, and anti-malware tooling.
  • IAM integration for authorization and authentication with platforms such as Active Directory, Okta, and PingFederate, allowing for more granular control over users and elevated privileges in the clouds.
  • Reference Architectures, pre-approved and with security baked in, created for the majority of the organization’s needs and ready for use in the infrastructure pipelines.
  • Self-service CMDB integration with tools such as ServiceNow, Remedy, and Jira Service Desk, allowing business units to provision their own infrastructure while providing the proper governance guardrails.
  • Resilient Architecture designs
  2. Proper Configuration and Maintenance – Infrastructure misconfiguration is the leading cause of data breaches in the cloud, and a big reason misconfiguration happens is infrastructure configuration “drift,” or change that occurs in a cloud environment post-provisioning. Using automation to monitor and self-remediate the environment will ensure the cloud environment stays in the proper configuration, eliminating the largest cause of incidents. Since workloads will live most of their life in this phase, it is important to ensure there isn’t any drift from the original secure deployment. An effective program will need:
  • Cloud Integrity Monitoring using cloud native tooling.
  • Log Management and Monitoring with centralized logging, critical in a well-designed environment.
  • Application Monitoring
  • Infrastructure Monitoring
  • Managed Services including patching to resolve issues.
  • SLAs to address incidents and quickly get them resolved.
  • Cost Management to ensure that budgets are met and there are no runaway costs.
  • Perimeter security utilizing cloud-native and third-party security appliances and services.
  • Data Classification
  3. Use of Industry-Leading Tools – for risk assessment, reporting, verification, and remediation. These thwart future problems and provide evidence to stakeholders that the cloud environment is rock solid. Tools and verification components would include:
  • Compliance reporting
  • Risk Registry integration into tools
  • Future attestations (BAAs)
  • Audit evidence generation
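As a simplified illustration of the “Policy as Code” and compliance-reporting guardrails above, the sketch below shows one common detective control: an AWS Config managed rule, declared in CloudFormation, that continuously flags S3 buckets without default encryption. The resource and rule names are illustrative; a real program layers many such rules and pairs them with automated remediation:

AWSTemplateFormatVersion: '2010-09-09'
Description: Illustrative detective guardrail - flag S3 buckets without default encryption
Resources:
  S3EncryptionRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-server-side-encryption-enabled   # illustrative name
      Scope:
        ComplianceResourceTypes:
          - AWS::S3::Bucket
      Source:
        Owner: AWS                                               # AWS-managed rule; no custom code required
        SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED

Findings from rules like this feed the compliance reporting and audit evidence generation listed above.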

Where do you go from here?

Your organization needs to innovate faster and drive value with the confidence of remaining in compliance. You need to get to a proactive state instead of being reactive. Consider an assessment to help you evaluate your organization’s place in the cloud journey and how the disparate forms of data in the organization are collected, controlled, processed, stored, and protected.

Start with an assessment that includes:

  • Identification of security gaps
  • Identification of foundational gaps
  • Remediation plans
  • Managed service provider onboarding plan
  • A Phase Two (Foundational/Remediation) proposal and Statement of Work

About 2nd Watch

2nd Watch is a trusted and proven partner, providing deep skills and advisory services to leading organizations for over a decade. We have earned a client Net Promoter Score of 85, which is a good indication that our customers nearly always recommend us to others. We can help your organization with cloud-native solutions. We offer skills in the following areas:

  • Developing cloud first strategies
  • Migration of workloads to the cloud
  • Implementing automation for governance and security guardrails
  • Implementing compliance controls and processes
  • Pipelines for data, infrastructure and application deployment
  • Subject matter expertise for FHIR implementations
  • Managed cloud services

Schedule time with an expert now: contact us.

-Tom James, Sr. Marketing Manager, Healthcare


Inside our Minds: Avoidable Security Issues

We picked our cloud security experts’ brains to find out what the biggest security hole they’ve ever had to patch was, what one of the worst security risks they’ve found and resolved was, and what the most avoidable security issue they see often is.


What is DevOps to Us?

What is DevOps to us? Our experts talk automating deployment of applications and infrastructure, including CI/CD pipelines, and the most popular DevOps tools.


Fully Coded And Automated CI/CD Pipelines: The Weeds

The Why

In my last post, we went over why we’d want to go the CI/CD/automated route and the cultural reasons it’s so beneficial. In this post, we’re going to delve a little deeper and examine the technical side of the tooling. Remember, a primary point of doing a release is mitigating risk. CI/CD is all about mitigating risk… fast.

There’s a Process

The previous article noted that you can’t do CI/CD without building on a set of steps, and I’m going to take this approach here as well. Unsurprisingly, we’ll follow the steps we laid out in the “Why” article, and tackle each in turn.

Step I: Automated Testing

You must automate your testing. There is no other way to describe this. In this particular step, however, we can concentrate on unit testing: testing the small chunks of code you produce (usually functions or methods). There’s some chatter about TDD (Test-Driven Development) vs. BDD (Behavior-Driven Development) in the development community, but I don’t think it really matters, as long as you are writing test code alongside your production code. On our team, we prefer the BDD-style testing paradigm. I’ve always liked the semantically descriptive nature of BDD tests over strictly code-driven ones. However, it should be said that both are effective, and either is better than none, so this is more a matter of personal preference. We’ve been coding in golang, and our BDD framework of choice is the Ginkgo/Gomega combo.

Here’s a snippet of one of our tests that’s not entirely simple:

Describe("IsValidFormat", func() {
for _, check := range AvailableFormats {
Context("when checking "+check, func() {
It("should return true", func() {
Ω(IsValidFormat(check)).To(BeTrue())
})
})
}
 
Context("when checking foo", func() {
It("should return false", func() {
Ω(IsValidFormat("foo")).To(BeFalse())
})
})
)

So as you can see, the Ginkgo (i.e., BDD) formatting is pretty descriptive about what’s happening. I can instantly understand what’s expected: the function IsValidFormat should return true for each entry in the range (list) of AvailableFormats, and a format of foo (which is not a valid format) should return false. It’s both tested and understandable to the future change agent (me or someone else).

Step II: Continuous Integration

Continuous Integration takes Step I further: it brings all the changes to your codebase together at a single point and builds an artifact for deployment. This means you’ll need an external system to automatically handle merges and pushes. We use Jenkins as our automation server, running it in Kubernetes and using the Pipeline style of job description. I’ll get into the way we do our builds using Make in a bit, but the fact that we can include our build code with our projects is a huge win.

Here’s a (modified) Jenkinsfile we use for one of our CI jobs:

def notifyFailed() {
    slackSend (color: '#FF0000', message: "FAILED: '${env.JOB_NAME} [${env.BUILD_NUMBER}]' (${env.BUILD_URL})")
}

podTemplate(
    label: 'fooProject-build',
    containers: [
        containerTemplate(
            name: 'jnlp',
            image: 'some.link.to.a.container:latest',
            args: '${computer.jnlpmac} ${computer.name}',
            alwaysPullImage: true,
        ),
        containerTemplate(
            name: 'image-builder',
            image: 'some.link.to.another.container:latest',
            ttyEnabled: true,
            alwaysPullImage: true,
            command: 'cat'
        ),
    ],
    volumes: [
        hostPathVolume(
            hostPath: '/var/run/docker.sock',
            mountPath: '/var/run/docker.sock'
        ),
        hostPathVolume(
            hostPath: '/home/jenkins/workspace/fooProject',
            mountPath: '/home/jenkins/workspace/fooProject'
        ),
        secretVolume(
            secretName: 'jenkins-creds-for-aws',
            mountPath: '/home/jenkins/.aws-jenkins'
        ),
        hostPathVolume(
            hostPath: '/home/jenkins/.aws',
            mountPath: '/home/jenkins/.aws'
        )
    ]
)
{
    node ('fooProject-build') {
        try {
            checkout scm

            wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'XTerm']) {
                container('image-builder') {
                    stage('Prep') {
                        sh '''
                            cp /home/jenkins/.aws-jenkins/config /home/jenkins/.aws/.
                            cp /home/jenkins/.aws-jenkins/credentials /home/jenkins/.aws/.
                            make get_images
                        '''
                    }

                    stage('Unit Test') {
                        sh '''
                            make test
                            make profile
                        '''
                    }

                    step([
                        $class:              'CoberturaPublisher',
                        autoUpdateHealth:    false,
                        autoUpdateStability: false,
                        coberturaReportFile: 'report.xml',
                        failUnhealthy:       false,
                        failUnstable:        false,
                        maxNumberOfBuilds:   0,
                        sourceEncoding:      'ASCII',
                        zoomCoverageChart:   false
                    ])

                    stage('Build and Push Container') {
                        sh '''
                            make push
                        '''
                    }
                }
            }

            stage('Integration') {
                container('image-builder') {
                    sh '''
                        make deploy_integration
                        make toggle_integration_service
                    '''
                }
                try {
                    wrap([$class: 'AnsiColorBuildWrapper', 'colorMapName': 'XTerm']) {
                        container('image-builder') {
                            sh '''
                                sleep 45
                                export KUBE_INTEGRATION=https://fooProject-integration
                                export SKIP_TEST_SERVER=true
                                make integration
                            '''
                        }
                    }
                } catch(e) {
                    container('image-builder') {
                        sh '''
                            make clean
                        '''
                    }
                    throw(e)
                }
            }

            stage('Deploy to Production') {
                container('image-builder') {
                    sh '''
                        make clean
                        make deploy_dev
                    '''
                }
            }
        } catch(e) {
            container('image-builder') {
                sh '''
                    make clean
                '''
            }
            currentBuild.result = 'FAILED'
            notifyFailed()
            throw(e)
        }
    }
}

There’s a lot going on here, but the important part to notice is that I grabbed this from the project repo. The build instructions are included with the project itself. It’s creating an artifact, running our tests, etc. But it’s all part of our project code base. It’s checked into git. It’s code like all the other code we mess with. The steps are somewhat inconsequential for this level of topic, but it works. We also have it set up to run when there’s a push to GitHub (AND nightly). This ensures that we are continuously running this build and integrating everything that’s happened to the repo in a day. It helps us keep on top of all the possible changes to the repo as well as our environment.

Hey… what’s all that make crap?

Make

Our team uses a lot of tools. We subscribe to the maxim: use what’s best for the particular situation. I can’t remember every tool we use. Neither can my teammates. Neither can 90% of the people that “do the devops.” I’ve heard a lot of folks say, “No! We must solidify on our toolset!” Let your teams use what they need to get the job done the right way. Now, the fear of experiencing tool “overload” seems like a legitimate one in this scenario, but the problem isn’t the number of tools… it’s how you manage and use them.

Enter Makefiles! (aka: make)

Make has been a mainstay in the UNIX world for a long time (especially in the C world). It is a build tool that’s utilized to help satisfy dependencies, create system-specific configurations, and compile code from various sources independent of platform. This is fantastic, except we couldn’t care less about that in the context of our CI/CD pipelines. We use it because it’s great at running “buildy” commands.

Make is our unifier. It links our Jenkins CI/CD build functionality with our Dev functionality. Specifically, opening up the docker port here in the Jenkinsfile:

volumes: [
    hostPathVolume(
        hostPath: '/var/run/docker.sock',
        mountPath: '/var/run/docker.sock'
    ),

…allows us to run THE SAME COMMANDS WHEN WE’RE DEVELOPING AS WE DO IN OUR CI/CD PROCESS. This socket allows us to run containers from containers, and since Jenkins itself runs in a container, this lets us run our toolset containers in Jenkins using the same commands we’d use in our local dev environment. On our local dev machines, we use docker nearly exclusively as a wrapper for our tools. This ensures we have library, version, and platform consistency across all of our dev environments as well as our build system. We use containers for our prod microservices, so production is part of that “chain of consistency” as well. It ensures that we see consistent behavior across the horizon of application development through production. It’s a beautiful thing! We use the Makefile as the means to consistently interface with the docker “tool” across differing environments.

Ok, I know your interest is piqued at this point. (Or at least I really hope it is!)
So here’s a generic makefile we use for many of our projects:

CONTAINER=$(shell basename $$PWD | sed -E 's/^ia-image-//')
.PHONY: install install_exe install_test_exe deploy test

install:
	docker pull sweet.path.to.a.repo/$(CONTAINER)
	docker tag sweet.path.to.a.repo/$(CONTAINER):latest $(CONTAINER):latest

install_exe:
	if [[ ! -d $(HOME)/bin ]]; then mkdir -p $(HOME)/bin; fi
	echo "docker run -itP -v \$$PWD:/root $(CONTAINER) \"\$$@\"" > $(HOME)/bin/$(CONTAINER)
	chmod u+x $(HOME)/bin/$(CONTAINER)

install_test_exe:
	if [[ ! -d $(HOME)/bin ]]; then mkdir -p $(HOME)/bin; fi
	echo "docker run -itP -v \$$PWD:/root $(CONTAINER)-test \"\$$@\"" > $(HOME)/bin/$(CONTAINER)
	chmod u+x $(HOME)/bin/$(CONTAINER)

test:
	docker build -t $(CONTAINER)-test .

deploy:
	captain push

This is a Makefile we use to build our tooling images. It’s much simpler than our project Makefiles, but I think it illustrates how you can use Make to wrap EVERYTHING you use in your development workflow. It also allows us to settle on similar, consistent terminology between different projects. %> make test? That’ll run the tests, regardless of whether we are working on a golang project, a Python Lambda project, or, in this case, building a test container and tagging it as whatever-test. Make unifies “all the things.”

This also codifies how to execute the commands: what arguments to pass, what inputs to provide, etc. If I can’t even remember the name of a command, I’m not going to remember its arguments. To remedy that, I just open up the Makefile and see instantly.

Step III: Continuous Deployment

After the last post (you read it, right?), some might have noticed that I skipped the “Delivery” portion of the “CD” pipeline. As far as I’m concerned, there is no “Delivery” in a “Deployment” pipeline. The “Delivery” is the actual deployment of your artifact. Since the ultimate goal should be Deployment, I’ve just skipped over that intermediate step.

Okay, sure, if you want to hold off on deploying automatically to Prod, then have that gate. But Dev, Int, QA, etc? Deployment to those non-prod environments should be automated just like the rest of your code.

If you guessed we use make to deploy our code, you’d be right! We put all our deployment code with the project itself, just like the rest of the code concerning that particular object. For services, we use a Dockerfile that describes the service container and several yaml files (e.g. deployment_<env>.yaml) that describe the configurations (e.g. ingress, services, deployments) we use to configure and deploy to our Kubernetes cluster.

Here’s an example:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: sweet-aws-service
    stage: dev
  name: sweet-aws-service-dev
  namespace: sweet-service-namespace
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sweet-aws-service
      name: sweet-aws-service
    spec:
      containers:
      - name: sweet-aws-service
        image: path.to.repo.for/sweet-aws-service:latest
        imagePullPolicy: Always
        env:
        - name: PORT
          value: "50000"
        - name: TLS_KEY
          valueFrom:
            secretKeyRef:
              name: grpc-tls
              key: key
        - name: TLS_CERT
          valueFrom:
            secretKeyRef:
              name: grpc-tls
              key: cert

This is an example of a deployment into Kubernetes for dev. That %> make deploy_dev from the Jenkinsfile above? That’s pushing this to our Kubernetes cluster.
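The Deployment above only schedules the pods. As noted earlier, the per-environment yaml files also describe services and ingress; here is a minimal, hypothetical sketch of the Service piece, reusing the labels and port from the example manifest (names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: sweet-aws-service
  namespace: sweet-service-namespace
  labels:
    app: sweet-aws-service
    stage: dev
spec:
  selector:
    app: sweet-aws-service      # matches the pod labels from the Deployment above
  ports:
    - name: grpc
      port: 50000               # matches the PORT env var in the container spec
      targetPort: 50000

An Ingress (or a load balancer service) would sit in front of this for anything that needs to be reachable from outside the cluster.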

Conclusion

There is a lot of information to take in here, but there are two points to really take home:

  1. It is totally possible.
  2. Use a unifying tool to… unify your tools. (“one tool to rule them all”)

For us, Point 1 is moot… it’s what we do. For Point 2, we use Make, and we use Make THROUGH THE ENTIRE PROCESS. I use Make locally in dev and on our build server. It ensures we’re using the same commands, the same containers, the same tools to do the same things. Test, integrate (test), and deploy. It’s not just about writing functional code anymore. It’s about writing a functional process to get that code, that value, to your customers!

And remember, as with anything, this stuff gets easier with practice. Once you start doing it, you’ll get the hang of it, and life becomes easier and better. If you’d like some help getting started, download our datasheet to learn about our Modern CI/CD Pipeline.

-Craig Monson, Sr Automation Architect

 
