
Cloud Center of Excellence: 3 Foundational Areas with 4 Phases of Maturity

A cloud center of excellence (CCoE) is essential for successful, efficient, and effective cloud implementation across your organization. Although the strategies look different for each business, there are three areas of focus, and four phases of maturity within those areas, that are important markers for any CCoE.

1. Financial Management

As you move to the public cloud and begin tapping into the innovation and agility it offers, you also take on the potential for budget overruns. Without proper planning and the inclusion of financial leaders, you may find you’re not only still paying for datacenters, but also racking up large, and growing, public cloud bills. Financial management needs to be centrally governed and extremely deliberate because it touches hundreds of thousands of places across your organization.

You may think involving finance will be painful, but bringing all stakeholders to the table as equals has proven highly effective. Over the last five years, there’s been a revolution in how finance can effectively engage in cloud and infrastructure management. This emerging model, guided by the CCoE, enables organizations to justify leveraging the cloud, not only based on agility and innovation, but also cost. Increasingly, organizations are achieving both better economics and gaining the ability to do things in the cloud that cannot be done inside datacenters.

2. Operations

To harness the power and scale possible in the cloud, you need to put standards and best practices in place. These often start around configuration – tagging policies, reference architectures, workloads, virtual machines, storage, and performance characteristics. Standardization is a prerequisite to repeatability and is the driving force behind gaining the best ROI from the cloud.

Today, we’re actually seeing that traditional application of the cloud does not yield the best economic benefits available. For decades, we accepted an architectural model where the operating system was central to the way we built, deployed, and managed applications. However, when you look beyond the operating system, whether it’s containers or the rich array of platform services available, you start to see new opportunities that aren’t available inside datacenters.

When you’re not sinking capital expenditure into infrastructure up front, and are instead consuming it only when you need it, you can really start to unlock the power of the cloud. There are many more workloads available to take advantage of as well. The more you build cloud-native, or cloud-centric, architecture, the more potential you have to maximize financial benefits.

3. Security and Compliance

Cloud speed is fast – much faster than what’s possible in datacenters. Avoid a potentially fatal breach, data disruption, or noncompliance penalty by putting strict security and compliance practices in place. You should be confident in the tools you implement throughout your organization, especially where the cloud is being managed day to day and changes are being driven. With each change and new instance, make sure you’re following the CCoE recommendations with respect to industry, state, and federal compliance regulations.

4-Phase Cloud Maturity Model

CloudHealth put forward a cloud maturity model based on patterns observed in over 10,000 customer interactions in the cloud. Like a traditional maturity model, the bottom left represents immaturity in the cloud, and the upper right signifies high maturity. Within each of the three foundational areas – financial management, operations, and security and compliance – an organization needs to scale and mature through the following four phases.

Phase 1: Visibility

Maturity starts at the most basic level by gaining visibility into your current architecture. Visibility gives you the connective tissue necessary to make smart decisions – although it doesn’t actually make those decisions obvious to you. First, know what you’re running, why you’re running it, and the cost. Then, analyze how it aligns with your organization from a business perspective.

Phase 2: Optimization

The goal here is all around optimization within each of the three areas. In regards to financial management and operations, you need to size a workload appropriately to support demand, but without going over capacity. In the case of security, optimization is proactively monitoring all of the hundreds of thousands of changes that occur across the organization each day. The strategy and tools you use to optimize must be in accordance with the best practices in your standards and policies.
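
To make right-sizing concrete, here is a minimal sketch of utilization-based sizing logic. The size ladder and the 20%/80% thresholds are illustrative assumptions, not provider guidance:

```python
# Hypothetical right-sizing check: the size ladder and utilization
# thresholds are illustrative assumptions, not AWS recommendations.
SIZES = ["small", "medium", "large", "xlarge"]

def p95(samples):
    """95th-percentile of a list of CPU utilization samples (0-100)."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return ordered[index]

def recommend_size(cpu_samples, current_size):
    """Suggest a step down when p95 CPU < 20%, a step up when > 80%."""
    utilization = p95(cpu_samples)
    position = SIZES.index(current_size)
    if utilization < 20 and position > 0:
        return SIZES[position - 1]
    if utilization > 80 and position < len(SIZES) - 1:
        return SIZES[position + 1]
    return current_size

# An over-provisioned workload idling at ~10% CPU steps down one size.
print(recommend_size([10] * 100, "xlarge"))  # large
```

In practice the samples would come from a monitoring source such as CloudWatch, and the decision would also weigh memory, network, and burst patterns, but the shape of the check is the same.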

Phase 3: Governance and Automation

In this phase you’re moving away from just pushing out dashboards, notification alerts, or reports to stakeholders. Now, it’s about strategically monitoring for the ideal state of workloads and applications in your business services. How do you automate the outcomes you want? The goal is to keep it in the optimum state all the time, or nearly all the time, without manual tasks and the risks of human error.

Phase 4: Business Integration

This is the ultimate state where the cloud gets integrated with your enterprise dashboards and service catalogue, and everything is connected across the organization. You’re no longer focused on the destination of the cloud. Instead, the cloud is just part of how you transact business.

As you move through each phase, establish measurements of cloud maturity using KPIs and simple metrics. Enlist the help of a partner like 2nd Watch that can provide expertise, automation, and software so you can achieve better business outcomes regardless of your cloud goals. Contact Us to understand how our cloud optimization services are maximizing returns.

-Chris Garvey, EVP of Product

Building Your Cloud Center of Excellence

You’ve migrated to the cloud and are using cloud services within your own team, but how do you scale that across the organization? A Cloud Center of Excellence (CCoE) is the best way to scale your usage of the cloud across multiple teams, especially when navigating organizational complexity.

What is a CCoE?

A Cloud Center of Excellence, or CCoE, is a group of cross-functional business leaders who collaboratively drive the best practices and standards governing the cloud implementation strategy across their organization. The model developed in response to changes the cloud brought about. Pre-cloud, all of our infrastructure, usage, and deployments of applications were controlled by central IT. Typically, the IT department both made the infrastructure and applications available and retained control over their management. Now, in the post-cloud world, management in large enterprises happens in hundreds or thousands of places across the organization – rather than solely in central IT. Today’s cloud moves at a pace much faster than what we saw inside traditional datacenters, and that speed requires a new kind of governance.

This seismic shift in responsibility and business-wide impact has brought both agility and innovation across organizations, but it can also introduce a fair amount of risk. A CCoE is a way to manage that risk with clear strategy development, governance, and buy-in from the top down. Utilizing stakeholders from finance and operations, architecture and security, a CCoE does not dictate or control cloud implementation, but uses best practices and standards throughout the organization to make cloud management more effective.

Getting started with a CCoE

First and foremost, a CCoE cannot start without recognizing the need for it. If you’re scaling in the public cloud, and you do not require and reinforce best practices and standards, you will hit a wall. Without a CCoE, there will be a tipping point at which that easy agility and innovation you received leveraging the public cloud suddenly turns against you. A CCoE is not a discretionary mechanism, it’s actually a prerequisite to scaling in the cloud successfully.

Once you know the significance and meaning of your CCoE, you can adapt it to the needs of your business and the state of your maturity. You need a clear understanding of both how you’re currently using the cloud, as well as how you want to use it going forward.

In doing that, you also need to set appropriate expectations. Over time, what you need and expect from a CCoE will change. Based on size, market, goals, compliance regulations, stakeholder input, etc., the job of a CCoE is to manage cloud implementation while avoiding risk. The key to a successful CCoE is balance: providing agility, innovation, and all the potential benefits of the cloud in a way that does not adversely impact your team’s ability to get things done. Even though the CCoE is driving strategy from the top, your employees need the freedom to make day-to-day management decisions, provision what they need and want, and use the agility provided by the cloud to be creative. It’s a fluid process much different from the rigid rack-and-stack infrastructure planning of a decade ago.

Create an ongoing process with returns by partnering with a company that knows what you need not only today, but in the future. The right partner will provide the products, people, and services that enable you to be successful. With all the complexity in the cloud, it’s extremely difficult to navigate and scale without an experienced expert.

2nd Watch Cloud Advisory Services include a Cloud Readiness Assessment to evaluate your current IT estate, as well as a Cloud Migration Cost Assessment that estimates costs across various cloud providers. As a trusted advisor, we’re here to answer key questions, define strategy, manage change, and provide impartial advice on a wide range of issues critical to successful cloud modernization. Contact Us to see how we can make your CCoE an organizational success.

-Chris Garvey, EVP of Product

Top 10 Cloud Optimization Best Practices

Cloud optimization is a continuous process specific to a company’s goals, but there are some staple best practices all optimization projects should follow. Here are our top 10.

1. Begin with the end in mind.

Business leaders and stakeholders throughout the organization should know exactly what they’re trying to achieve with a cloud optimization project. Additionally, this goal should be revisited on a regular basis to make sure you remain on track to achievement. Create measures to gauge success at different points and follow the agreed upon order of operations to complete the process.

2. Create structure around governance and responsibility.

Overprovisioning is one of the most common issues adding unnecessary costs to your bottom line. Implement specific and regulated structure around governance and responsibility for all teams involved in optimization to control any unnecessary provisioning. Check in regularly to make sure teams are following the structure and you only have the tools you need and are actively using.

3. Get all the data you need.

Cloud optimization is a data-driven exercise. To be successful, you need insight into a range of data sources. Not only do you need to identify what data you need and be able to get it, but you also need to know what data you’re missing and figure out how to get it. Collaborate with internal teams to make sure essential data isn’t siloed and to find out whether it’s already being collected elsewhere. Additionally, regularly clean and validate data to ensure reliability for data-based decision making.
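
A minimal sketch of the kind of clean-and-validate pass described above; the field names and date format are hypothetical examples:

```python
# Toy data-quality summary: counts missing values, unparseable dates,
# and duplicate IDs. Field names here are illustrative assumptions.
from datetime import datetime

def check_records(records, required_fields):
    """Return a summary of missing values, bad dates, and duplicates."""
    issues = {"missing": 0, "bad_date": 0, "duplicates": 0}
    seen_ids = set()
    for row in records:
        if any(row.get(field) in (None, "") for field in required_fields):
            issues["missing"] += 1
        try:
            datetime.strptime(row.get("order_date", ""), "%Y-%m-%d")
        except ValueError:
            issues["bad_date"] += 1
        if row.get("id") in seen_ids:
            issues["duplicates"] += 1
        seen_ids.add(row.get("id"))
    return issues

rows = [
    {"id": 1, "order_date": "2021-03-01", "amount": 20},
    {"id": 1, "order_date": "03/01/2021", "amount": None},  # dupe, bad date, missing value
]
print(check_records(rows, ["id", "order_date", "amount"]))
```

Running a report like this on a schedule, and alerting when the counts rise, is one simple way to keep the data trustworthy enough for decision making.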

4. Implement tagging practices.

To best utilize the data you have, organizing and maintaining it with strict tagging practices is necessary. Implement a system that works from more than just a technical standpoint. You can also use tagging to launch instances, control your auto-parking methodology, or drive scheduling. Tagging helps you understand the data and see what is driving spend. Whether it’s an environment tag, owner tag, or application tag, tagging provides clarity into spend, which is the key to optimization.
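
A tagging policy like this can be enforced with a simple compliance check. The required tag keys below are examples, not a standard – your CCoE would define its own:

```python
# Sketch of enforcing a tagging policy before (or after) resources are
# provisioned. The required tag keys are hypothetical examples.
REQUIRED_TAGS = {"environment", "owner", "application"}

def missing_tags(resource_tags):
    """Return the required tag keys absent from a resource's tags."""
    return sorted(REQUIRED_TAGS - set(resource_tags))

def tag_compliance(resources):
    """Fraction of resources carrying every required tag."""
    compliant = sum(1 for tags in resources if not missing_tags(tags))
    return compliant / len(resources)

fleet = [
    {"environment": "prod", "owner": "data-team", "application": "etl"},
    {"environment": "dev"},  # untagged resources surface immediately
]
print(missing_tags(fleet[1]))  # ['application', 'owner']
print(tag_compliance(fleet))   # 0.5
```

The same check could be wired into a provisioning pipeline or a governance service such as AWS Config so untagged resources are flagged, or rejected, automatically.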

5. Gain visibility into spend.

Tagging is one way to see where your spend is going, but it’s not the only one you need. Review accounts regularly to make sure inactive accounts aren’t continuing to be billed. Set up an internal mechanism to review with your app teams and hold them accountable. It can be as simple as a dashboard that grades tagging compliance, as long as it lets the data speak for itself.

6. Hire the right technical expertise.

Get more out of your optimization with the right technical expertise on your internal team. Savvy technicians should work alongside the business teams to drive the goals of optimization throughout the process. Without collaboration between these departments, you risk moving in differing directions with multiple end goals in mind. For example, one team might be acting with performance or another technical aspect in mind without realizing the implications for optimization. Partnering with optimization experts can also keep teams aligned and moving toward the same goal.

7. Select the right tools and stick with them.

Tools are a part of the optimization process, but they can’t solve problems alone. Additionally, there are an abundance of tools to choose from, many of which have similar functionality and outcomes. Find the right tools for your goals, facilitate adoption, and give them the time and data necessary to produce results. Don’t get distracted by every new, shiny tool available and the “tool champions” fighting for one over another. Avoid the costs of overprovisioning by checking usage regularly and maintaining the governance structure established throughout your teams.

8. Make sure your tools are working.

Never assume a tool or a process you’ve put in place is working. In fact, it’s better to assume it’s not working and consistently check its efficiency. This regular practice of confirming the tools you have are both useful and being used will help you avoid overprovisioning and unnecessary spending. For tools to be effective and serve their purpose, you need enough visibility to determine how the tool is contributing to your overall end goal.

9. Empower someone to drive the process.

The number one call to action for anyone diving into optimization is to appoint a leader. Without someone specific, qualified, and active in managing the project with each stakeholder and team involved, you won’t accomplish your goals. Empower this leader internally to gain the respect and attention necessary for employees to understand the importance of continuous optimization and contribute on their part.

10. Partner with experts.

Finding the right partner to help you optimize efficiently and effectively will make the process easier at every turn. Bringing in an external driver who has the know-how and experience to consult on strategy through implementation, management, and replication is a smart move with fast results.

2nd Watch takes a holistic approach to cloud optimization with a team of experienced data scientists and architects who help you maximize performance and returns on your cloud assets. Are you ready to start saving? Let us help you define your optimization strategy to meet your business needs and maximize your results. Contact Us to take the next step in your cloud journey.

-Willy Sennott, Optimization Practice Manager

Ten Years In: Enterprise DevOps Evolves

DevOps has undergone significant changes since the trend began more than a decade ago. No longer limited to a grassroots movement among ‘cowboy’ developers, DevOps has become synonymous with enterprise software releases. In our Voice of the Enterprise: DevOps, Workloads and Key Projects 2020 survey, we found that 90% of companies that had deployed applications to production in the last year had adopted DevOps across some teams (55%) or entirely across the IT organization (40%). Another 9% were in discovery phases or PoC with their DevOps implementation, leaving only a tiny fraction of respondents reporting no adoption of DevOps.

DevOps is driven by the need for faster releases, more efficient IT operations and flexibility to respond to changes in the market, whether technical, such as the advent of cloud-native technologies, or otherwise, such as the Covid-19 pandemic. Still, one of the biggest drivers of the trend and a primary reason DevOps has become part and parcel of enterprise software development and deployment is adoption from the top down. IT management and executive leadership are increasingly interested and involved in DevOps deployments, often because DevOps is a critical part of cloud migration, digital transformation and other key initiatives. Most organizations also report that their DevOps implementation is managed or sanctioned by the organization, in line with the departure from the shadow-IT DevOps deployments of 5 or 10 years ago toward approved deployments that meet policy, security and compliance requirements.

Another significant change in DevOps is the growing role of business objectives and outcomes. Organizations are measuring and proving their DevOps success not only using technical metrics such as quality (47%) and application performance (44%), but also business metrics such as customer satisfaction (also 44%), according to our VotE DevOps study. We also see line-of-business managers among important stakeholders in DevOps beyond developers and IT operators. The increased focus and priority on business also often translates to a different view on DevOps and IT operations in general. While IT administration has traditionally been a budget spending item with a focus on total cost of ownership (TCO), today’s enterprises are increasingly viewing DevOps and IT ops as a competitive advantage that will bring return on investment (ROI).

Another significant aspect of DevOps today is the stakeholder spread. Our surveys have consistently highlighted how security, leadership, traditional IT administrators and business/product managers play an increasingly important role in DevOps, in addition to software developers and IT operations teams. As DevOps spreads to more teams and applications within an organization, it is more likely to pull in these and other key stakeholders, including finance or compliance, among others. We also see additional people and teams, such as those in sales and marketing or human resources, becoming more integral to enterprise DevOps as the trend continues to evolve.

The prominence of security among primary DevOps stakeholders is indicative of the rapidly evolving DevSecOps trend, whereby security elements are integrated into DevOps workflows. Our data highlights how a growing number of DevOps releases include security elements, with 64% of companies indicating they include security elements in 2020, compared with 53% in 2019. DevSecOps is being driven mainly by changing attitudes among software developers, who are increasingly less likely to think that security will slow them down and more likely to tie security to quality, which is something they care about. Software security vendors have also worked to make security tooling such as API firewalls, vulnerability scanning and software composition analysis (SCA) more integrated and automated so it doesn’t slow down developers. Finally, the frequency of high-profile security incidents and breaches reminds everyone of the need to reduce risk as much as possible.

Another change in DevOps is an increasing awareness and appreciation of not just technology challenges, but also cultural aspects. Our data indicates top cultural challenges of DevOps include overcoming resistance to change, competing/conflicting priorities and resources, promoting communication and demonstrating equity of benefits/costs. By aligning objectives, priorities and desired outcomes, teams can better address these cultural challenges to succeed and spread their DevOps implementations. This is also where we’ve seen cross-discipline experience – in development, in IT operations, in security, etc. – can be integral to addressing cultural issues.

If you haven’t yet begun your own DevOps transformation, 2nd Watch takes an interesting approach you can consider. Their DevOps Transformation process begins with a complete assessment of your current software development and operational maturity, using the CALMS model, and develops a strategy for where and how to apply DevOps approaches.

Jay Lyman, Senior Research Analyst, Cloud Native and Applied Infrastructure & DevOps at 451 Research, part of S&P Global Market Intelligence

3 Productivity-Killing Data Problems and How to Solve Them

With the typical enterprise using over 1,000 Software as a Service applications (source: Kleiner Perkins), each with its own private database, it’s no wonder people complain their data is siloed. Picture a thousand little silos, all locked up!

[Figure: Number of cloud applications used per enterprise, by industry vertical]

Then, imagine you start building a dashboard out of all those data silos. You’re squinting at it and wondering, can I trust this dashboard? You placate yourself because at least you have data to look at, but this creates more questions for which data doesn’t yet exist.

If you’re in a competitive industry, and we all are, you need to take your data analysis to the next level. You’re either gaining competitive advantage over your competition or being left behind.

As a business leader, you need data to support your decisions. These three data complexities are at the core of every leader’s difficulties with gaining business advantages from data:

  1. Siloed data
  2. Untrustworthy data
  3. No data

  1. Siloed data

Do you have trouble seeing your data at all? Are you mentally scanning your systems and realizing just how many different databases you have? A recent customer of ours was collecting reams of data from their industrial operations but couldn’t derive the data’s value due to the siloed nature of their datacenter database. The data couldn’t reach any dashboard in any meaningful way. It is a common problem. With enterprise data doubling every few years, it takes modern tools and strategies to keep up with it.

For our customer, we started with defining the business purpose of their industrial data – to predict demand in the coming months so they didn’t have a shortfall. That business purpose, which had team buy-in at multiple corporate levels, drove the entire engagement. It allowed us to keep the technology simple and focused on the outcome.

One month into the engagement, they had clean, trustworthy, valuable data in a dashboard. Their data was unlocked from the database and published.

Siloed data takes some elbow grease to access, but it becomes a lot easier if you have a goal in mind for the data. It cuts through noise and helps you make decisions more easily if you know where you are going.

  2. Untrustworthy data

Do you have trouble trusting your data? You have a dashboard, yet you’re pretty sure the data is wrong, or lots of it is missing. You can’t take action on it, because you hesitate to trust it. Data trustworthiness is a prerequisite for making your data action oriented. But, most data has problems – missing values, invalid dates, duplicate values, and meaningless entries. If you don’t trust the numbers, you’re better off without the data.

Data is there for you to take action on, so you should be able to trust it. One key strategy is to not bog down your team with maintaining systems, but rather use simple, maintainable, cloud-based systems that use modern tools to make your dashboard real.

  3. No data

Often you don’t even have the data you need to make a decision. “No data” comes in many forms:

  • You don’t track it. For example, you’re an ecommerce company that wants to understand how email campaigns can help your sales, but you don’t have a customer email list.
  • You track it but you can’t access it. For example, you start collecting emails from customers, but your email SaaS system doesn’t let you export your emails. Your data is so “siloed” that it effectively doesn’t exist for analysis.
  • You track it but need to do some calculations before you can use it. For example, you have a full customer email list, a list of product purchases, and you just need to join the two together. This is a great place to be and is where we see the vast majority of customers.

Solving “no data” means finding patterns and insights not just within datasets, but across datasets. This is only possible with a modern, cloud-native data lake.
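
The email-list and purchase-history example above can be sketched in plain SQL. Here SQLite stands in for a data lake query engine such as Athena, and the table and column names are illustrative:

```python
# Joining a customer email list to purchase history with one SQL query.
# SQLite is used as a stand-in for a data lake query engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (email TEXT, customer_id INTEGER);
    CREATE TABLE purchases (customer_id INTEGER, product TEXT);
    INSERT INTO customers VALUES ('a@example.com', 1), ('b@example.com', 2);
    INSERT INTO purchases VALUES (1, 'widget'), (1, 'gadget'), (2, 'widget');
""")

# Which products did each email address buy?
rows = conn.execute("""
    SELECT c.email, p.product
    FROM customers c
    JOIN purchases p ON p.customer_id = c.customer_id
    ORDER BY c.email, p.product
""").fetchall()
print(rows)
```

Once both datasets live in the same queryable store, "engineering" the data you need is often just a join like this one.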

The solution: define your business need and build a data lake

Step one for any data project – today, tomorrow and forever – is to define your business need.

Do you need to understand your customer better? Whether it is click behavior, email campaign engagement, order history, or customer service, your customer generates more data today than ever before that can give you clues as to what she cares about.

Do you need to understand your costs better? Most enterprises have hundreds of SaaS applications generating data from internal operations. Whether it is manufacturing, purchasing, supply chain, finance, engineering, or customer service, your organization is generating data at a rapid pace.

(AWS: What is a Data Lake?)

Don’t be overwhelmed. You can cut through the noise by defining your business case.

The second step in your data project is to take that business case and make it real in a cloud-native data lake. Yes, a data lake. I know the term has been abused over the years, but a data lake is very simple; it’s a way to centrally store all (all!) of your organization’s data, cheaply, in open source formats to make it easy to access from any direction.

Data lakes used to be expensive, difficult to manage, and bulky. Now, all major cloud providers (AWS, Azure, GCP) have established best practices to keep storage dirt-cheap and data accessible and very flexible to work with. But data lakes are still hard to implement and require specialized, focused knowledge of data architecture.

How does a data lake solve these three problems?

  1. Data lakes de-silo your data. Since the data stored in your data lake is all in the same spot, in open-source formats like JSON and CSV, there aren’t any technological walls to overcome. You can query everything in your data lake from a single SQL client. If you can’t, then that data is not in your data lake and you should bring it in.
  2. Data lakes give you visibility into data quality. Modern data lakes and expert consultants build in a variety of checks for data validation, completeness, lineage, and schema drift. These are all important concepts that together tell you if your data is valuable or garbage. These sorts of patterns work together nicely in a modern, cloud-native data lake.
  3. Data lakes welcome data from anywhere and allow for flexible analysis across your entire data catalog. If you can format your data into CSV, JSON, or XML, then you can put it in your data lake. This solves the problem of “no data.” It is very easy to create the relevant data, either by finding it in your organization, or engineering it by analyzing across your data sets. An example would be joining data from Sales (your CRM) and Customer Service (Zendesk) to find out which product category has the best or worst customer satisfaction scores.

The 2nd Watch DataOps Foundation Platform

You should only build a data lake if you have clear business outcomes in mind. Most cloud consulting partners will robotically build a bulky data lake without any thought to the business outcome. What sets 2nd Watch apart is our focus on your business needs. Do you need to make better decisions? Speed up a process? Reduce costs somewhere? We keep your goal front and center throughout the entire engagement. We’ve deployed data lakes dozens of times for enterprises with this unique focus in mind.

Our ready-to-deploy data lake captures years of cloud experience and best practices, with integration from governance to data exploration and storage. We explain the reasons behind the decisions and make changes based on your requirements, while ingesting data from multiple sources and exploring it as soon as possible. The core of the data lake is three zones, each implemented as an S3 bucket.

Here is a tour of each zone:

  • Drop Zone: As the “single source of truth,” this is a copy of your data in its most raw format, always available to verify what the actual truth is. Place data here with minimal or no formatting. For example, you can take a daily “dump” of a relational database in CSV format.
  • Analytics Zone: To support general analytics, data in the Analytics Zone is compressed and reformatted for fast analytics. From here, you can use a single SQL Client, like Athena, to run SQL queries over your entire enterprise dataset — all from a single place. This is the core value add of your data lake.
  • Curated Zone: The “golden” or final, polished, most-valued datasets for your company go here. This is where you save and refresh data that will be used for dashboards or turned into visualizations.
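
To make the zone flow concrete, here is a toy sketch of promoting raw drop-zone data into a compressed analytics zone. The bucket layout and key names are illustrative assumptions; in an AWS data lake this step would typically run as a Glue job over S3 objects:

```python
# Toy model of the drop-zone -> analytics-zone promotion step.
# A dict stands in for S3 (key -> bytes); names are illustrative.
import gzip

lake = {}

def drop(key, raw_bytes):
    """Land data untouched in the drop zone (single source of truth)."""
    lake[f"drop-zone/{key}"] = raw_bytes

def promote_to_analytics(key):
    """Compress drop-zone data for cheaper storage and faster scans."""
    raw = lake[f"drop-zone/{key}"]
    lake[f"analytics-zone/{key}.gz"] = gzip.compress(raw)

drop("orders/2021-03-01.csv", b"id,amount\n1,20\n2,35\n")
promote_to_analytics("orders/2021-03-01.csv")
print(sorted(lake))
```

The raw copy is never modified, so the drop zone stays a verifiable record while the analytics zone holds the query-optimized version.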

Our classic three-zone data lake on S3 features immutable data by default. You’ll never lose data, nor do you have to configure a lot of settings to accomplish this. Using AWS Glue, data is automatically compressed and archived to minimize storage costs. Convenient search with an always-up-to-date data catalog allows you to easily discover all your enterprise datasets.

In the Curated Zone, only the most important “data marts” – approved datasets – get loaded into more costly Redshift or RDS, minimizing costs and complexity. And with Amazon SageMaker, tapping into your Analytics and Curated Zone, you are prepared for effective machine learning. One of the most overlooked aspects of machine learning and advanced analytics is the great importance of clean, available data. Our data lake solves that issue.

If you’re struggling with one of these three core data issues, the solution is to start with a crisp definition of your business need, and then build a data lake to execute on that need. A data lake is just a central repository for flexible and cheap data storage. If you focus on keeping your data lake simple and geared towards the analysis you need for your business, these three core data problems will be a thing of the past.

If you want more information on creating a data lake for your business, download our DataOps Foundation datasheet to learn about our 4-8 week engagement that helps you build a flexible, scalable data lake for centralizing, exploring and reporting on your data.

-Rob Whelan, Practice Manager, Data Engineering & Analytics

Cloud for Advanced Users – The 5 Most Important Lessons Learned Over a Decade

Being involved in cloud services and working closely with cloud providers over the past 10 years has given us a great deal of insight into the triumphs and pitfalls of cloud consumers. We’ve distilled that vast experience into our list of the 5 most important lessons we’ve learned over the past decade for users who are experienced in the cloud, with multiple applications and workloads running.

  1. Governance – Tagging, Tools, and Automation

Many of our customers have hundreds, if not thousands, of accounts, and we’ve helped them solve many of their governance challenges. One challenge is making sure teams aren’t doing certain things – for example, standing up shadow IT or functioning in silos. In the cloud, you want everyone to have visibility into best practices and an understanding of the critical role cloud plays in creating business value.

There are numerous tools and automation methods you can leverage to ensure your governance keeps pace with the latest innovation. First and foremost, a strong tagging strategy is critical. As with shadow IT, if you don’t tag things correctly, your teams can spin up resources with limited visibility into who owns them, leaving them running and accumulating expenses over time. If you don’t start with a tagging strategy from day one, correcting it retroactively is a herculean task. Starting with a strong architectural foundation, and keeping that foundation in place with the proper tools, will ensure governance doesn’t become a burden.

Putting proper guardrails in place, such as AWS Config rules, helps overcome this challenge and makes sure everybody follows the rules. Sometimes governance and moving fast can seem like adversaries, but automation can help satisfy both.
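As a rough sketch of the kind of check such guardrails automate (AWS Config’s managed required-tags rule does this natively against your live inventory), the logic boils down to comparing each resource’s tags against a required set. The tag keys and resource IDs below are hypothetical examples, not a recommended policy.

```python
# Hypothetical tag policy -- pick keys that map to real owners and budgets.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys a resource is missing."""
    return REQUIRED_TAGS - set(resource_tags)

# Hypothetical inventory keyed by resource ID.
inventory = {
    "i-0abc123": {"owner": "data-team", "environment": "prod"},
    "i-0def456": {"owner": "web-team", "cost-center": "4711",
                  "environment": "dev"},
}

for resource_id, tags in inventory.items():
    gaps = missing_tags(tags)
    if gaps:
        print(f"{resource_id} is missing tags: {sorted(gaps)}")
```

Running a check like this continuously, rather than during a one-off audit, is what keeps untagged resources from silently accumulating cost.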

  2. Optimization – It’s not a one-time exercise

Cloud users tend to think of optimization in terms of Reserved Instances (RIs), but it reaches far beyond just RIs. Well-defined policies must exist to control spend, along with the discipline to enforce them.

There are many ways to leverage cloud native solutions and products to achieve optimization as well as new classes of service. One key point is leveraging the right resources where appropriate. As new services come out and skills increase within organizations, the opportunity to not only optimize spend but optimize the applications themselves by leveraging more cloud native services will continue to drive down operating cost.

Optimization is not a one-time exercise, either. It’s an ongoing practice that needs to be done on a regular basis. Like cleaning out the garage, you need to maintain it. Who’s responsible for this? Often, it’s your company’s Cloud Center of Excellence, or a partner like 2nd Watch.
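To illustrate why RI decisions deserve ongoing review rather than a one-time pass, the core trade-off is a simple break-even calculation: an RI only pays off if the instance actually runs enough of the time. The prices below are hypothetical, not published AWS rates.

```python
def ri_breakeven_utilization(on_demand_hourly: float,
                             ri_effective_hourly: float) -> float:
    """Fraction of the term an instance must run for a Reserved
    Instance to beat on-demand pricing for the same hours."""
    return ri_effective_hourly / on_demand_hourly

# Hypothetical prices: $0.10/hr on demand vs. $0.062/hr effective RI rate.
threshold = ri_breakeven_utilization(0.10, 0.062)
print(f"RI pays off above {threshold:.0%} utilization")
# RI pays off above 62% utilization
```

Workload usage patterns drift, which is exactly why this kind of math has to be revisited on a regular cadence rather than computed once at purchase time.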

  3. Cloud Center of Excellence – Be bold and challenge the norm

We encourage all organizations to form a Cloud Center of Excellence (CCoE). Typically led by an executive, your CCoE should be a multi-stakeholder organization that includes representatives from all areas of the business. With this multi-skilled group, you benefit from subject matter experts across a wide variety of areas within your organization who collectively become subject matter experts in cloud services and solutions. When you break down silos, you’re able to move rapidly.

Your CCoE should be formed at the beginning of your migration and continue to revisit new capabilities released in the cloud on an ongoing basis, updating the organization’s standards to ensure enforcement.

One of the CCoE’s biggest roles is evangelizing within the organization to ensure people are embracing the cloud and celebrating successes, whether those come from implementing DevOps with cloud native tools or from optimizing and refactoring for the cloud. The CCoE’s motto should be, ‘Be bold, challenge the norm, look for new ways of doing things, and celebrate BIG.’

  4. Multi-Cloud – Get out of your comfort zone

As an advanced user, you have grown up with AWS and have a solid understanding of it. You’ve learned all the acronyms and understand the products and services. But now you’re being asked to integrate another cloud service provider (CSP) you might not be as familiar with. How do you take that foundational cloud knowledge and transition to Azure or GCP?

There’s a little bit of a learning curve, so we recommend taking a training course. Some even offer training based upon your knowledge of AWS. For example, GCP offers training for AWS professionals. Training can help you acclimate to the nomenclature and technology differences between CSPs.

We typically see customers go deep with one cloud provider, and that tends to be where most workloads reside. This can be for financial reasons or due to skills and experience; you get a greater discount when you push more things into one CSP. However, some solutions fit better in one CSP than another. To maximize your cloud strategy, you need to break down walls, get out of your comfort zone, and pursue the best avenue for the business.

  5. Talent – Continuously sharpen the knife’s edge

Cloud talent is in high demand, so attracting top people can be challenging. One way to overcome this is to develop talent internally. All cloud providers offer certifications, and incentivizing employees to earn them goes a long way. With that, success breeds success. Celebrate and evangelize early wins!

The cloud changes fast, so you need to continuously retrain and relearn. And as a bonus, individuals who are involved in the CCoE have the unique opportunity to learn and grow outside of their area of expertise, so proactively volunteer to be a part of that group.

If you want more detailed information in any of these five areas, we have a wealth of customer examples we’d love to jump into with you. Contact us to start the conversation.

-Ian Willoughby, Chief Architect and Skip Barry, Executive Cloud Enablement Director
