If you’re considering beginning a DevOps transformation, we recommend starting by identifying where you are now and where you need to improve before moving forward. This initial process highlights potential barriers that could slow the transformation or make it fail altogether. At 2nd Watch, we rely on the CALMS Model to comprehensively assess teams and processes as a starting point to a DevOps transformation.
What is the CALMS Model?
Originally developed by Damon Edwards and John Willis, and later enhanced by Jez Humble, the CALMS Model assesses your business today and throughout the DevOps transformation. The Model addresses five fundamental elements of DevOps:
- Culture: Collaborative and customer-centered culture across all functions.
  - Is there a top-down effort to shift culture?
  - Are developers working with app owners, operations, legal, security, or any other department that will be affected?
  - How does your management team support the omni-directional integration of teams?
  - Do you enable the team to fail fast?
  - Is the team encouraged to continuously learn?
- Automation: Remove the toil, or wasted work, with automation.
  - How much toil is present?
  - Is your continuous integration (CI) and continuous delivery (CD) pipeline flowing?
  - Are processes automated efficiently?
  - Is infrastructure automated and available on demand?
- Lean: Agile and scrappy teams focused on continuous improvement.
  - Can you visualize your work in progress?
  - Where can you slim down?
  - Can you reduce batch sizes?
  - Can you limit your work in progress?
- Measurement: Data is critical to measure progress.
  - What are you measuring and how?
  - How do you determine progress, success, and the need to optimize?
  - What are your goals?
- Sharing: All teams teach each other and feel empowered to contribute.
  - How are information and tribal knowledge captured and shared across teams?
  - Do teams feel comfortable sharing unsuccessful experiences for learning purposes?
  - Do teams share duties and goals?
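The Measurement questions above lend themselves to concrete metrics. As a minimal sketch (the record format, field names, and sample data here are illustrative assumptions, not part of the CALMS Model itself), two widely tracked DevOps metrics – deployment frequency and lead time for changes – can be computed from basic deployment records:

```python
from datetime import datetime

# Hypothetical deployment records: commit time and deploy time per change.
deployments = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 2, 9)},
    {"committed": datetime(2024, 1, 3, 9), "deployed": datetime(2024, 1, 3, 15)},
    {"committed": datetime(2024, 1, 8, 9), "deployed": datetime(2024, 1, 9, 9)},
]

def deployment_frequency(records, days):
    """Average deployments per day over the reporting window."""
    return len(records) / days

def mean_lead_time_hours(records):
    """Mean time from commit to production deploy, in hours."""
    total = sum((r["deployed"] - r["committed"]).total_seconds() for r in records)
    return total / len(records) / 3600

print(deployment_frequency(deployments, days=30))   # deploys per day
print(mean_lead_time_hours(deployments))            # hours from commit to deploy
```

Establishing a baseline for numbers like these before the transformation starts is what makes "progress, success, and the need to optimize" measurable later.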
Not only is the CALMS Model easy to use and comprehensive in a DevOps transformation, but it can also be applied to any other transformational effort. Whether you want to reduce costs, increase speed, or realize other benefits of change, use the CALMS Model as your foundation to assess where you are and where you are going.
Starting small with the CALMS Model
A good way to begin any DevOps transformation is to start small. Select a group of teams willing to prove the concept and complete a CALMS assessment within their departments. Choose up to five different teams and let the results of the assessment determine the top one or two highest functioning teams. These teams will serve as your subjects for an internal case study to showcase your readiness and chart a larger scale plan.
While proof of concept is important for buy-in, it’s also valuable to bring the right people along the change curve. Invite your IT executives to get on board with DevOps and understand the entire transformation, rather than just the tooling. It’s not just CI/CD; it’s removing barriers, implementing analytics, and, most importantly, achieving the benefits of the DevOps transformation.
The best way to bring IT executives into DevOps is to demonstrate the business case. Show them the reduction in costs, increase in speed, efficient resource allocation – whatever your goals may be – as well as the measurements you’ll use to prove that value. Of course, the CALMS Model has these points built in, so it’s a good tool to use for all stakeholders in the DevOps transformation. Present your findings to IT executives and let the assessment determine if you’re ready to move forward. Not only does the CALMS Model support each DevOps value, it enables measurement both upfront and throughout your transformation, allowing you to prove your return on investment.
If you want a successful DevOps transformation complete with assessment and strategy, infrastructure as code essentials, CI/CD, and cloud native application modernization, schedule a briefing with one of our DevOps experts to take the first step.
-Stefana Muller, Sr Product Manager, DevOps & Migration
DevOps is a set of cultural values and organizational practices that improve business outcomes by increasing collaboration and feedback between teams. Of course, there are industry best practices, but a DevOps transformation will look different and yield different results for each organization, depending on its business and strategy. With so many options and moving parts in a DevOps transformation, don’t let these myths delay the start of yours.
Myth #1: Tools will solve your DevOps problems.
Unfortunately, even the best tools are not going to solve all your DevOps issues. Tools are enablers that assist in removing unnecessary toil, but they can’t magically make things perfect. For instance, implementing an automation tool sounds like a great time and resource saver at first glance. However, the tool can only produce those results if the structure around the tool can accommodate the action. If your team isn’t ready for that speed, you’ll likely just speed to failure.
Don’t put the tools before the structure. Instead, think long-term and comprehensively when constructing your DevOps transformation map. Otherwise, roadblocks will slow down your ability to achieve speed.
Myth #2: You should start with CI/CD.
Typically, people begin their DevOps transformation with continuous integration (CI) and continuous delivery (CD). A CI/CD pipeline enables fast code changes through automated deployment steps that create a more consistent and agile environment. While the results of CI/CD are the goal, starting there doesn’t take into account the support necessary for successful implementation.
Today, the DevOps transformation is being refined to include discussions and planning around the evolution of production support, application monitoring, and automated dashboards. When you start with CI/CD, you’re focused on development speed, but operations might not be ready to accommodate it. In true DevOps fashion, you need to bridge the gap between development and operations first to produce a streamlined feedback loop. Operations must have their tools ready to feed into the CI/CD pipeline to break down barriers early on and avoid stopping points in the future.
Myth #3: DevOps transformation and cloud transformation can’t happen at the same time.
Promises of more speed and lower costs motivate businesses to jump into the cloud quickly, with the expectation that those benefits and return on investment (ROI) will be delivered immediately. The issue with this way of thinking is a lack of forethought prior to action, which leaves departments fractured. Teams need to be trained on new cloud procedures, security must be implemented, legal has to update contracts, and company culture, from the top down, needs to be on board for adoption.
Fortunately, there’s a lot of overlap between a DevOps transformation and a cloud transformation. In fact, DevOps can be the support you need for a successful cloud transformation without any roadblocks. Instead of waiting on DevOps because you’re not agile yet, start with it at the beginning of your cloud transformation. Utilize DevOps best practices as you migrate to the cloud to help transform how your teams work with a well-constructed plan for company-wide implementation.
Myth #4: Security’s only role is vulnerability scanning.
Waiting until you’re finished with development to include security is a DevOps anti-pattern. As an essential part of the business, security can contribute more than just vulnerability scanning before you go live. When you only look to security for the last line screening, you’re inviting a significant bottleneck into your process.
Of course, getting security involved and excited about DevOps can be a struggle because the two seem inherently at odds. The goal of DevOps is to increase speed with new tools; the goal of security is to decrease risk, which can slow processes and releases. And when you increase speed and change with new implementations, you are increasing risk. Instead of asking, “How do we get security involved before vulnerability scanning?” consider the benefit of getting development to include security as part of the CI/CD pipeline. Automate steps like vulnerability scanning, secrets detection, license checks, SAST, and DAST early in the development cycle so issues are found and addressed as they arise. This removes the security roadblock to production.
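A security gate of this kind is, at its core, a pipeline stage that aggregates scanner findings and fails the build only on blocking severities. The sketch below illustrates the shape of that logic; the scanner functions are hypothetical stand-ins (in a real pipeline each would invoke an actual SAST or secrets-detection tool):

```python
# Minimal sketch of a CI/CD security gate. The scanners below are hypothetical
# stand-ins, not real tools; each returns a list of findings with a severity.

def sast_scan(source_dir):
    # Stand-in: a real SAST tool would analyze the code in source_dir.
    return [{"severity": "low", "issue": "example finding"}]

def secrets_scan(source_dir):
    # Stand-in: a real secrets detector would scan files and commit history.
    return []

def security_gate(source_dir, fail_on=frozenset({"high", "critical"})):
    """Run all scans; fail the build if any blocking finding is present."""
    findings = sast_scan(source_dir) + secrets_scan(source_dir)
    blocking = [f for f in findings if f["severity"] in fail_on]
    if blocking:
        raise SystemExit(f"Build failed: {len(blocking)} blocking security findings")
    return findings  # non-blocking findings are reported, not fatal

print(f"{len(security_gate('./src'))} findings, none blocking")
```

Running the gate early in every build, rather than as a final pre-release screening, is what turns security from a bottleneck into a routine pipeline step.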
In addition, cross-training between DevOps and security invites both teams to understand their colleagues’ goals, responsibilities, roles, expectations, risks, and challenges. Give each team real-life examples of how the gap creates conflict for each side. Once they have a better understanding of the other side, they’re more likely to consider one another during product planning and development.
One well-known and widely accepted truth to a successful DevOps transformation is the benefit of an expert partner in the process. 2nd Watch offers packaged service offerings to help you get the most out of your DevOps transformation with start to finish essentials that deliver the results you expect. Contact Us to see how you can gain more leverage with less risk.
-Stefana Muller, Sr Product Manager, DevOps & Migration
When your organization needs to address a specific problem, change the status quo, follow new trends, add a premier service, etc., it requires an approach that leads to success. A DevOps Transformation is the modern IT leader’s choice for achieving the speed and innovation needed to meet today’s market demands.
What is DevOps?
DevOps is a set of practices that combines software development (‘Dev’) and IT operations (‘Ops’). The term grew out of the initial struggle between these two vital pieces of product creation. However, as technology has expanded and become more business focused, so has the meaning of the term.
Initially, businesses were mostly focused on making development and operations work better together by providing appropriate processes and specific automation technology. However, over the nearly 12 years that this term has been in existence, businesses have grown, and we’ve realized that the struggle doesn’t just exist between dev and ops, but rather the entire structure of the business.
How has DevOps evolved?
DevOps grew from the agile transformation effort, where companies had to speed up, find new ways to develop software, and get to market more quickly in order to remain competitive. Although agile was initially a development methodology, it makes sense that we now use it as part of a DevOps Transformation, expanding the use of agile across business areas to help companies quickly deliver the highest value to their customers.
Another big evolution in DevOps is how information should flow. DevOps used to encourage a bi-directional exchange between developers and operations, but now it’s omni-directional throughout the entire organization including departments like security, finance, marketing, product management, and business-line owners. It’s important to expand the feedback loop to and from all the stakeholders in an organization – including your customers – in order to fully deliver high value products or services.
If departments are unable or unwilling to share their wins and losses, communicate candidly, and exchange information, the transformation will hit barrier after barrier. This is especially important for project management, which has to change its processes drastically in a DevOps Transformation in comparison to a department-specific or smaller scale project. With the main objective being speed, planning is critical.
Who is involved in a DevOps Transformation?
The short answer is everyone. When you go through a DevOps Transformation, your company is essentially speeding things up. Typically, the action takes place in the application development and cloud operations space because that’s where you develop and deploy new products or features. But it’s not just the traditional Dev and Ops teams being affected by the transformation. Executives need to be all-in to encourage cooperation throughout the organization. Finance has to transact quicker, sales needs to sell differently, marketing has to understand new features and find the best ways to promote them, and legal has to update contracts with clients and providers to enable the rapid change brought by this model.
Today’s DevOps Transformation affects almost every business area in an organization, reshaping cultural values and organizational practices and improving business outcomes by increasing collaboration and feedback. This is why we refer to it as a transformation rather than an implementation – it’s really going to change your entire business.
Why are so many companies choosing DevOps?
A DevOps Transformation isn’t the only way to approach change management, but beneficial outcomes are making it a popular one.
- Streamlined processes: Remember that the foundation of DevOps is built on removing the struggle between departments. Enabling teams to cross-innovate, by eliminating barriers and encouraging a culture of constant innovation, propels your business forward faster.
- Challenges resolved: When your approach is based on removing issues and facilitating an omni-directional exchange of information, you get an unencumbered view of what those issues are and can then work toward resolution.
- Specific benefits achieved: DevOps is used to target specific goals, and with the measurement necessary to determine success, IT leaders inherently receive a slew of additional benefits. For example, if you’re using DevOps to address high development costs, you won’t only accomplish cost reduction, but also faster time to market and higher security in your development cycle.
- Valuable data collected: DevOps requires you to measure, report, and be transparent with everything you do. These insights not only aid the DevOps Transformation currently in focus, but also future initiatives.
- Elevated customer experience: Delighting your customers with new, stable technology on a frequent basis will contribute to retention and new business growth.
2nd Watch has developed a number of DevOps Transformation services to aid enterprises in their transformations, including assessment and strategy, training and tooling, continuous integration and continuous delivery (CI/CD), cloud native application modernization, and full DevOps management. Get more out of your DevOps Transformation while becoming self-sufficient for the future. Contact Us to discuss how a DevOps Transformation can help you achieve your goals.
-Stefana Muller, Sr Product Manager, DevOps & Migration
It’s not uncommon for us to hear from clients that the “thing” holding organizations back from fully migrating to the cloud is their data. The reason given is that it is too valuable an asset to move. We believe, however, that it is too valuable an asset not to move to the cloud. Keeping your database on-premises holds your business back from making the coveted “full digital transformation.”
The fact is, legacy on-premises databases put a considerable burden on IT infrastructure, staffing resources, and data security. Most organizations want their DBAs to spend their time optimizing database performance, ensuring they have the correct high-level architecture, and experimenting; however, older databases can force administrators to expend considerable time on troubleshooting capacity and performance issues.
There are several options available from AWS and Azure for moving your database to the cloud. A simple lift-and-shift of your on-premises database to a cloud instance will allow you the most flexibility to continue working with your database in a fashion you are most familiar with. You maintain complete control of the operating system, database version and administration as you currently do, but you have the advantages of the cloud with the ability to add CPUs, cores or capacity on demand.
Alternatively, you can move to a Managed Database Service. These services deliver fully optimized infrastructure specific to your choice of database to improve performance and ensure you are running on the latest version with the newest features. They also lower your Total Cost of Ownership (TCO) by performing many of the administrative tasks, such as automating patching and back-ups while guaranteeing high availability with multiple 9’s of uptime.
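Those “multiple 9’s of uptime” translate directly into allowable downtime, which is worth quantifying when comparing a managed service’s guarantee against what you can achieve on-premises. A quick sketch (the SLA percentages shown are common industry tiers, not any one provider’s figures):

```python
# Convert an availability SLA ("number of nines") into permitted downtime.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    """Minutes per year of downtime allowed under a given availability SLA."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.9, 99.95, 99.99):
    print(f"{sla}% availability -> {downtime_minutes_per_year(sla):.0f} min/year")
```

Each additional nine cuts permitted downtime by an order of magnitude – roughly 526 minutes per year at 99.9% versus about 53 minutes at 99.99% – which is a useful yardstick when weighing the TCO argument for a managed service.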
However, because your data is an invaluable asset to your organization, it is understandable that there is some hesitance to undertake the transformation. Migrating the application database is perhaps the most critical step in any workload migration. Maintaining integrity and availability of data is critical, and the time taken to synchronize data between old and new systems may determine the duration of any service disruption during migration.
This is where it helps to partner with a services provider like 2nd Watch. Over the last decade, we have developed a proven methodology to help our clients overcome the challenges and hesitancy of moving one of their most precious assets, their database, to the cloud. Our Database Migration Service comprehensively examines your databases and applications and maps out all the dependencies. We work with you to understand unique requirements, such as the impact of latency on your applications’ performance, to ensure the migration is seamless to your customers.
Migrating your database to the cloud creates new opportunities to derive value from data, utilize modern BI and AI tools, and reduce the cost and management overhead of traditional on-premises solutions. It can increase the time database architects and developers are able to spend on high-value projects, such as developing new applications and applying advanced analytics. Whether you are running MS SQL Server, MySQL, PostgreSQL, MariaDB or Oracle Database, 2nd Watch can help you migrate your database to your cloud platform of choice.
To learn more, download our datasheet for details on migrating your database to Microsoft Azure or AWS.
-Dusty Simoni, Sr Product Manager
Everyone’s journey to the cloud is different. Before deciding your direction, you should consider your business goals, risk tolerance, internal skills, cost objectives, and existing technology ecosystem. For some, the choice is a 100% native cloud-first strategy on a single Cloud Service Provider (CSP). Others will use a mixture of services across multiple providers. And some others will choose a hybrid strategy in some form. For a hybrid approach, an interesting option worth considering is leveraging VMware Cloud (VMC) on AWS.
VMware Cloud on AWS is a great solution to consider whether you are integrating your on-prem work environment into the cloud, evacuating your datacenter, scaling datacenter extensions, looking at disaster recovery (DR), or focusing on remote workforce enablement.
What is VMware Cloud on AWS?
About three years ago, hundreds of engineers from VMware and AWS spent more than two years bringing the VMware Cloud solution to market. VMware Cloud on AWS refers to the VMware infrastructure stack, also known as VMware Cloud Foundation. It encompasses the three infrastructure software pieces that VMware is known for: vSphere, NSX, and vSAN. vSphere virtualizes compute, NSX virtualizes the network, and vSAN virtualizes storage. VMC is an instance of VMware Cloud Foundation running on AWS bare metal hardware. When you sign up for a VMware Cloud account, you can get access to the entire VMware stack in an AWS availability zone in just 90 minutes.
Traditionally, VMware has lived in datacenters, consolidating many servers onto single pieces of hardware. With VMC on AWS, you can now move that functionality to the cloud and enjoy the many benefits of the platform.
1. Expanded functionality
There is so much more functionality in the VMware stack than in the cloud alone. There’s also more functionality in the cloud than you can build in your own environment. VMware Cloud on AWS is more than just a traditional VMware stack. It’s all the functionality of NSX, vSAN, and vSphere, plus the latest additions, at your fingertips, allowing you to always run the latest version of VMware to have access to the newest features. VMware takes care of the maintenance, upgrading, and patching, and with VMC being placed in AWS, you have instant access to all of the AWS cloud features in close physical proximity to your application, allowing you to experience improved performance.
2. Easy adoption
If you’re new to the cloud and have experience with VMware, you will easily be able to apply those existing on-prem skills to VMC on AWS. Because vSphere on-prem is the same as vSphere on AWS, it’s backwards compatible. The traditional vCenter management interface has the same look and feel and operates the same in the cloud as it does on-prem. These mirrored interfaces allow you to preserve the investment you have made in your existing VMware administrators, keeping headcount and employee costs down because you don’t have to hire for new skills or ask existing techs to expand their skillset. This quick familiarity lets you ramp up and use the service much faster than bringing in a completely new platform.
3. Agile scaling capability
After COVID-19 safety precautions sent 80-90% of the workforce home, organizations scrambled to enable and protect their new remote workers. Datacenters and VDI farms weren’t built to scale for the influx, and it’s just not possible to build additional datacenters as fast as necessary. Organizations needed already-built hardware, available datacenters, and software that could meet their needs quickly. VMC on AWS solves the problem because it is built to scale without the limitations of on-prem environments.
4. Transition from CAPEX to OPEX
A fundamental change people are seeing from VMC on AWS is the ability to move from a capital expenditures (CAPEX) model to an operating expenditures (OPEX) model, freeing you from exceptionally long and expensive contracts for datacenters and DR locations.
With VMC, you can move to an OPEX model and spread your cost out over time, and the hardware, maintenance, and upgrades are no longer your responsibility. On top of that, the savings in headcount and man hours create a productive conversation between IT and financial staff as to what’s best for the overall organization.
5. Lower costs
Chances are, you’re already using VMware and recognize it as a premium brand, so if you’re looking at cost solely from a compute point of view, it might appear as if costs are higher. However, if you add up the individual expenses you incur without VMC – including real estate, hardware, software maintenance, headcount, management, and travel costs – and compare that to VMC on AWS, the cost-benefit ratio favors VMC. Additional resources are saved when you consider all the management roles that are no longer your responsibility. VMware also offers a hybrid loyalty program with incentives and savings for customers who are already invested in the VMware ecosystem.
2nd Watch holds the VMware Cloud on AWS Master Services Competency. If you’re considering the next step in your cloud journey, Contact Us to learn more about our team of VMware Cloud experts, available to help you navigate the best platform for your goals.
If there’s one thing IT professionals can agree on, it’s that hybrid cloud computing isn’t going away. Developed in response to our growing dependence on data, the hybrid cloud is being embraced by enterprises and providers alike.
What is hybrid cloud computing?
Hybrid cloud computing can be a combination of private cloud, like VMware, and public cloud; or it can be a combination of cloud providers, like AWS, Azure and Google Cloud. Hybrid cloud architecture might include a managed datacenter or a company’s own datacenter. It could also include both on-prem equipment and cloud applications.
Hybrid cloud computing gained popularity alongside the digital transformation we’ve witnessed taking place for years. As applications evolve and become more dev-centric, they can be stored in the cloud. At the same time, there are still legacy apps that can’t be lifted and shifted into the cloud and, therefore, have to remain in a datacenter.
Ten years ago, hybrid and private clouds were used to combat growth, but now we’re seeing widespread adoption from service providers to meet client needs. Strategies now range from on-prem up to the cloud (VMware Cloud (VMC) on AWS), to cloud-down (AWS Outposts), to robust deployment and management frameworks for any endpoint (GCP Anthos).
With that said, for many organizations data may never entirely move to the cloud. A company’s data is their ‘secret sauce,’ and despite the safety of the cloud, not everything lends itself to cloud storage. Depending on what exactly the data is – mainframes, proprietary information, formulas – some businesses don’t feel comfortable with service providers even having access to such business-critical information.
One major reason companies move to the cloud is the large amount of data they are now storing. Some companies might not be able to, or might not want to, build and expand their datacenter as quickly as the business and data requires.
With the option for unlimited storage the cloud provides, it is an easy solution. Rather than having to forecast data growth, prioritize storage, and risk additional costs, a hybrid strategy allows for expansion.
The cloud is, in most cases, far more secure than on-prem. However, especially when the cloud first became available, a lot of companies were concerned about who could see their data, the potential for leaks, and how to guarantee lockdown. Today, security tools have vastly improved, visibility is much better, and cloud providers must satisfy compliance requirements from a growing number of local and federal authorities. Additionally, third-party auditors verify cloud provider practices, alongside internal oversight, to avoid a potentially fatal data breach. Organizations large and small, across industries, and even secret government agencies now trust the cloud for secure data storage.
It’s also important to note that the public cloud can be more secure than your own datacenter. For example, if you try to isolate data in your own datacenter or on your own infrastructure, you might find a rogue operator creating shadow IT where you don’t have visibility. With hybrid cloud, you can take advantage of tools like AWS Control Tower, Azure Sentinel, AWS Landing Zone blueprints, and other CSP security tools to ensure control of the system. Similarly, with tooling from VMware and GCP Anthos you can look to create single policy and configuration for environment standardization and security across multiple clouds and on-prem in a single management plane.
Hybrid cloud computing is a great option when it comes to cost. On an application level, the cloud lets you scale up or down, and that versatility and flexibility can save costs. But if you’re running always-on, stagnant applications in a large environment, keeping them in a datacenter can be more cost effective. One can make a strong case for a mixture of applications being placed in the public cloud while internal IP apps remain in the datacenter.
You also need to consider the cost of your on-prem environment. There are some cases, depending on the type and format of storage necessary, where the raw cost of the cloud doesn’t deliver a return on investment (ROI). If your datacenter equipment is running at 80% utilization or above, it might be more cost effective to continue running the workload there. Alternatively, you should also consider burst capacity and your non-consistent workloads. If you don’t need something running 24/7, the cloud lets you turn it off at night to deliver savings.
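The turn-it-off-at-night savings are easy to quantify with simple arithmetic. A minimal sketch, assuming a hypothetical on-demand hourly rate (the rate and schedule below are illustrative, not actual provider pricing):

```python
# Estimate savings from running a non-production workload only during
# business hours instead of 24/7. The hourly rate is a hypothetical example.

HOURLY_RATE = 0.20          # example on-demand rate in USD/hour (assumption)
HOURS_ALWAYS_ON = 24 * 30   # full month, always on
HOURS_BUSINESS = 12 * 22    # 12 hours/day across 22 weekdays

always_on_cost = HOURLY_RATE * HOURS_ALWAYS_ON
scheduled_cost = HOURLY_RATE * HOURS_BUSINESS
savings_pct = 100 * (1 - scheduled_cost / always_on_cost)

print(f"Always on: ${always_on_cost:.2f}/month")
print(f"Scheduled: ${scheduled_cost:.2f}/month")
print(f"Savings:   {savings_pct:.0f}%")
```

Under these example assumptions the scheduled workload costs roughly a third of the always-on one, which is the kind of comparison worth running per workload before deciding what stays on-prem and what moves.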
Bonus Reason – Consistency of management tooling and staff skills
A hybrid approach also lets you standardize management tooling across clouds and on-prem – as with VMware and GCP Anthos – while preserving the investment you’ve already made in your administrators’ existing skills.
The smartest way to move forward with your cloud architecture – hybrid or otherwise – is to consult with cloud computing experts. 2nd Watch helps you choose the most efficient strategy for your business, aids in planning and completing migration in an optimized fashion, and secures your data with comprehensive cloud management. Contact Us to take the next step in your cloud journey.
-Dusty Simoni, Sr Product Manager, Hybrid Cloud