A Short Guide to Understanding Looker Pricing and Capabilities

Navigating the current BI and analytics landscape is often an overwhelming exercise. With buzzwords galore and price points all over the map, finding the right tool for your organization is a common challenge for CIOs and decision-makers. Given the pressure to become a data-driven company, the way business users analyze and interact with their data has lasting effects throughout the organization.

Looker, a recent addition to the Gartner Magic Quadrant, has a pricing model that differs from the typical per-user or per-server approach. Looker does not advertise its pricing; instead, it provides a “custom-tailored” quote based on a number of factors, including total users, types of users (viewer vs. editor), database connections, and scale of deployment.

Those who have been through the first enterprise BI wave (with tools such as Business Objects and Cognos) will be familiar with this approach, but those accustomed to the SaaS pricing model of “per user per month” may see an estimate higher than expected – especially when comparing it to Power BI at $10 per user per month. In this article, we’ll walk you through the reasons why Looker’s pricing is competitive in the market and what it offers that other tools do not.

Semantic and Governance Model

Unlike some of its competitors, Looker is not solely a reporting and dashboarding tool – it also acts as a data catalog for the enterprise. Looker requires users to think about their data and how they want it defined across the organization.

Before you can start developing dashboards and visualizations, your organization must first define a semantic model (an abstraction of the database layer into business-friendly terms) using Looker’s native LookML scripting language, which then translates the business definitions into SQL. Centralizing the definitions of business metrics and models establishes a single source of truth across departments. This avoids the scenario where the finance department defines a metric differently than the sales or marketing teams while all are working from the same underlying data. A common business model also eliminates the need for users to understand the relationships of tables and columns in the database, allowing for true self-service.

While this requires more upfront work, you will save yourself the future headaches of debating why two reports show different values, or of redefining the same business logic in every dashboard you create.

By putting data governance front and center, your data team can make it easy for business users to create insightful dashboards in a few simple clicks.

Customization and Extensibility

At some point in the lifecycle of your analytics environment, there’s a high likelihood you will need to make some tweaks. Looker, for example, allows you to view and modify the SQL that is generated behind each visualization. While this may sound like a simple feature, a common pain point across analytics teams is trying to validate and tie out aggregations between a dashboard and the underlying database. Access to the underlying SQL not only lets analysts quickly debug a problem but also allows developers to tweak the auto-generated SQL to improve performance and deliver a better experience.

Another common complaint from users is how long it takes IT to integrate new data into the data warehouse. In the “old world” of Cognos and Business Objects, if your calculations were not defined in the framework model or universe, you could not proceed without IT intervention. In the “new world” of Tableau, the dashboard and visualization are prioritized over the model. Looker brings the two approaches together with derived tables.

If your data warehouse doesn’t directly support a question you need to answer immediately, you can use Looker’s derived tables feature to create your own derived calculations. Derived tables allow you to create new tables that don’t already exist in your database. While relying on derived tables for long-term analysis is not recommended, they let Looker users get immediate speed-to-insight while the data development team incorporates the new logic into the enterprise data integration plan.

Collaboration

Looker takes collaboration to a new level as every analyst gets their own sandbox. While this might sound like a recipe for disaster with “too many cooks in the kitchen,” Looker’s centrally defined, version-controlled business logic lives in the software for everyone to use, ensuring consistency across departments. Dashboards can easily be shared with colleagues by simply sending a URL or exporting directly to Google Drive, Dropbox, and S3. You can also send reports as PDFs and even schedule email delivery of dashboards, visualizations, or their underlying raw data in a flat file.

Embedded Analytics

Looker enables collaboration outside of your internal team. Suppliers, partners, and customers can get value out of your data thanks to Looker’s modern approach to embedded analytics. Looker makes it easy to embed dashboards, visuals, and interactive analytics into any webpage or portal because it works directly against your own data warehouse. You don’t have to create a new pipeline or pay the cost of storing duplicate data in order to take advantage of embedded analytics.

So, is Looker worth the price?

Looker puts data governance front and center, which in itself is a decision your organization needs to make (govern first vs. build first). A centralized way to govern and manage your models is something that often comes at additional cost in other tools, increasing the total investment when evaluating competitors. If data governance and a centralized source of truth are critical to your analytics deployment, then the ability to manage this and avoid the headaches of multiple versions of the truth makes Looker worth the cost.

If you’re interested in learning more or would like to see Looker in action, 2nd Watch has a full team of data consultants with experience and certifications in a number of BI platforms as well as a thorough understanding of how these tools can fit your unique needs. Get started with our data visualization starter pack.

 


Google Cloud, Open-Source and Enterprise Solutions

In 2020, a year where enterprises had to rethink their business models to stay alive, Google Cloud was able to grow 47% and capture market share. If you are not already looking at Google Cloud as part of your cloud strategy, you probably should.

Google has made conscious choices about not locking in customers with proprietary technology. Open-source technology has, for many years, been a core focus for Google, and many of Google Cloud’s solutions can integrate easily with other cloud providers.

Kubernetes (GKE), Knative (Cloud Run), TensorFlow (Machine Learning), and Apache Beam (Data Pipelines) are some examples of cloud-agnostic tools that Google has open-sourced and which can be deployed to other clouds as well as on-premises, if you ever have a reason to do so.

Specifically, some of Google Cloud’s services and its go-to-market strategy set it apart. Modern and scalable solutions like BigQuery, Looker, and Anthos fall into this category. They are best-in-class tools for their respective use cases, and if you are serious about your digital transformation efforts, you should evaluate their capabilities and understand what they can do for your business.

Three critical challenges we repeatedly see from our enterprise clients here at 2nd Watch are:

  1. How to get started with public cloud
  2. How to better leverage their data
  3. How to take advantage of multiple clouds

Let’s dive into each of these.

Foundation

Ask any architect if they would build a house without a foundation, and they would unequivocally tell you “No.” Unfortunately, many companies new to the cloud do precisely that. The most crucial step in preparing an enterprise to adopt a new cloud platform is setting up the foundation.

Future standards are dictated in the foundation, so building it incorrectly will cause unnecessary pain for your valuable engineering resources. A proper foundation, one that aligns your project structure with your project lifecycle and environments and includes a CI/CD pipeline to push infrastructure changes through code, will enable your teams to become more agile while managing infrastructure in a modern way.

A foundation’s essential building blocks include project structure, network segmentation, security, IAM, and logging. Google offers Cloud Operations, a multi-cloud tool for log management, reporting, and alerting; alternatively, you can ingest logs into your existing tools or deploy the brand of firewall you’re most familiar and comfortable with from the Google Cloud Marketplace. Depending on your existing tools and industry regulations, compliance best practices might vary slightly, guiding you in one direction or another.

DataOps

Google has, since its inception, been an analytics powerhouse. The amount of data moving through Google’s global fiber network at any given time is incredible. Why does this matter to you? Google has now made some of its internal tools that manage large amounts of data available to you, enabling you to better leverage your data. BigQuery is one of these tools.

Because BigQuery is serverless, you can get started on a budget, and it can scale to petabytes of data without breaking a sweat. If you have managed data warehouses, you know that scaling them while keeping them performant is no easy task. With BigQuery, it is.
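To show how little infrastructure there is to manage, here is a minimal sketch using the google-cloud-bigquery Python client to run a query against one of Google's public datasets. The project ID is a placeholder, and the snippet assumes you have application default credentials configured.

    from google.cloud import bigquery

    # Placeholder project ID; BigQuery provisions and scales the compute behind the query.
    client = bigquery.Client(project="my-gcp-project")

    query = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        WHERE state = 'TX'
        GROUP BY name
        ORDER BY total DESC
        LIMIT 10
    """

    # No clusters to size and no indexes to tune; submit the SQL and read the results.
    for row in client.query(query).result():
        print(f"{row.name}: {row.total}")

There is no warehouse to provision or resize first; the same call pattern works whether the table holds megabytes or petabytes.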

Another valuable tool, Looker, makes visualizing your data easy. It enables departments to share a single source of truth, breaking down data silos and fostering collaboration through dashboards and views built for both data science and business analysis.

Hybrid Cloud Solutions

Google Cloud offers several services for multi-cloud capabilities, but let’s focus on Anthos here. Anthos provides a way to run Kubernetes clusters on Google Cloud, AWS, Azure, on-premises, or even on the edge while maintaining a single pane of glass for deploying and managing your containerized applications.

With Anthos, you can deploy applications virtually anywhere and serve your users from the cloud datacenter nearest them, across all providers, or run apps at the edge – like at local franchise restaurants or oil drilling rigs – all with the familiar interfaces and APIs your development and operations teams know and love from Kubernetes.

BigQuery Omni, currently in preview, will soon be released to the public. BigQuery Omni lets you extend the capabilities of BigQuery to the other major cloud providers. Behind the scenes, BigQuery Omni runs on top of Anthos, and Google takes care of scaling and running the clusters, so you only have to worry about writing queries and analyzing data, regardless of where your data lives. For enterprises that have already adopted BigQuery, this can mean significant savings on data transfer charges between clouds, because each query runs where the data resides.

Google Cloud offers unmatched open-source technology and enterprise solutions you can leverage to gain a competitive advantage. 2nd Watch has helped organizations overcome business challenges and meet objectives with similar technology, implementations, and strategies on all major cloud providers, and we would be happy to assist you in getting to the next level on Google Cloud.

2nd Watch is here to serve as your trusted cloud data and analytics advisor. When you’re ready to take the next step with your data, contact us.

Learn more

Webinar: 6 Essential Tactics for your Data & Analytics Strategy

Webinar: Building an ML foundation for Google BigQuery ML & Looker

-Aleksander Hansson, 2nd Watch Google Cloud Specialist


5 Cloud Optimization Benefits

A common term that gets tossed around when making a cloud migration is “cloud optimization.” If your organization is new to the cloud, optimizing your environment is essential to ensuring your migration pays off quickly and continues to do so in the long term.

If your organization is already established in the cloud, you may observe higher costs than expected due to cloud sprawl, under-utilized resources, and improper allocation of resources. Cloud optimization helps your organization reduce these costs and improve overall efficiency in the cloud.

What is cloud optimization?

The definition of cloud optimization may vary from one cloud service provider to another, but generally, cloud optimization is the process of analyzing, configuring, provisioning, and right-sizing cloud resources to maximize performance and minimize waste for cost efficiency. The reality is that many organizations’ cloud environments are configured in an inefficient manner that creates unnecessary cloud spend. With proper cloud optimization tools and practices, these unnecessary costs can be eliminated.

While cloud optimization is mostly discussed in terms of cloud spend, cost optimization is simply one facet of cloud optimization, which also extends to overall performance and organizational efficiency. Some examples of cloud optimization practices that your organization can adopt right now include:

  • Right-sizing: Matching your cloud computing instance types (e.g., containers and VMs) and sizes to your workload’s performance and capacity needs so you meet them at the lowest possible cost (see the sketch after this list).
  • Family Refresh: Replacing instances from older hardware generations with the latest instance families to maximize price-performance.
  • Autoscaling: Scaling your resources up and down with application demand so you are only paying for what you use.
  • Applying Discounts: Reserved instances (RIs) allow companies to commit to cloud resources for an extended period. The longer the commitment, and the more a company is prepared to pre-pay at the beginning of the period, the greater the discount will be. Discounted pricing models like RIs and spot instances will drive down your cloud costs when matched to the right workloads.
  • Identifying Use of RIs: Identifying where RIs are, or could be, in use is an effective way to save money in the cloud, provided they are applied to suitable workloads.
  • Eliminating Waste: Retiring unused resources is a core component of cloud optimization. If you haven’t already adopted cloud optimization practices, you are most likely running more resources than necessary or not using certain resources to their full capacity.
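To make right-sizing and waste elimination concrete, here is a minimal, illustrative Python sketch that flags instances for downsizing or termination based on average CPU utilization pulled from a monitoring export. The instance data, field names, and thresholds are assumptions for illustration, not a production rule set.

    # Hypothetical utilization data exported from your monitoring tool.
    instances = [
        {"name": "web-01", "vcpus": 8, "avg_cpu_pct": 7.5},
        {"name": "etl-02", "vcpus": 16, "avg_cpu_pct": 62.0},
        {"name": "legacy-03", "vcpus": 4, "avg_cpu_pct": 0.4},
    ]

    IDLE_THRESHOLD = 2.0       # below this, the instance looks unused
    DOWNSIZE_THRESHOLD = 20.0  # below this, a smaller instance likely suffices

    for inst in instances:
        if inst["avg_cpu_pct"] < IDLE_THRESHOLD:
            action = "review for termination (eliminate waste)"
        elif inst["avg_cpu_pct"] < DOWNSIZE_THRESHOLD:
            action = "right-size to a smaller instance type"
        else:
            action = "leave as-is"
        print(f'{inst["name"]}: {inst["avg_cpu_pct"]}% avg CPU -> {action}')

Real right-sizing decisions would also weigh memory, disk, network, and peak (not just average) load, but the basic pattern of comparing observed utilization to provisioned capacity is the same.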

Why is cloud optimization important?

Overspending in the cloud is a common issue many organizations face by allocating more resources to a workload than necessary. Integrating cloud optimization practices can reap many benefits for your cloud infrastructure and your organization, including the following:

  • Cloud Efficiency: When workload performance, compliance, and cost are continually balanced against the best-fit infrastructure in real-time, efficiency is achieved. Implementing cloud optimization practices will eliminate as much cloud resource waste as possible, increasing the performance of your cloud environment.
  • Cost Savings: Although cloud optimization comes in a variety of forms, cost optimization is the most important component for many organizations. By reducing waste in the cloud, costs are reduced as a byproduct.
  • Greater Visibility: Cloud optimization practices utilize analytics to provide visibility into your cloud environment to make data-driven decisions. Implementing optimization tools also provides cost visibility, so your organization has a better perspective on cloud spend.
  • Increased Productivity: Once a cloud optimization strategy is implemented, IT teams will spend less time trying to solve problems because an optimized environment prevents problems before they occur.
  • Organizational Innovation & Efficiency: Implementing cloud optimization is often accompanied by a cultural shift within the organization, such as improved decision-making and better collaboration across teams.

What are cloud optimization services?

Public cloud service providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud offer over 500,000 distinct prices and technical combinations, which can overwhelm even the most experienced IT organizations and business units. Luckily, there are services that can help your organization achieve the cloud optimization it needs to drive business outcomes. Cloud optimization services help your organization identify areas of improvement in your cloud for cost savings and efficiency, create an optimization strategy, and manage your cloud infrastructure for continuous optimization.

At 2nd Watch, we take a holistic approach to cloud optimization. We have developed optimization pillars based on real-time data to ensure your cloud environments are running as efficiently as possible. Behind our cloud optimization solutions is a team of experienced data scientists and architects who help you maximize the performance and returns of your cloud assets. Our cloud optimization service offerings at 2nd Watch include:

  • Strategy & Planning: Define your optimization strategy with our proven methodology, tailored to meet your desired business outcomes and maximize your results.
  • Cost Optimization Assessment: Gain the visibility necessary to make data-driven decisions. Identify opportunities across our Pillars of Optimization to maximize cost savings and cloud environment efficiency.
  • Spot Instance & Container Optimization: Save up to 90% compared to traditional cloud infrastructure by running both Instances/VMs and Containers on spot resources for relevant workloads.
  • Multi-Cloud Optimization: Cloud optimization on a single public cloud is one challenge, but optimizing a multi-cloud or hybrid environment is another challenge entirely. Apply what you learn from your assessment to optimize your cloud environment across AWS, Microsoft Azure, Google Cloud, and VMware on AWS.
  • Forecasting, Modeling, & Analytics: Understand your past usage, and model and forecast your future needs with the analytical data needed for visibility across your organization.

Our cloud optimization process starts with data, and you have a lot of it. But data alone can lead you astray, yielding wasted resources and overspend. There are many other factors to evaluate, such as EDP/EA agreements and Savings Plans/RI purchases, to ensure you choose the most cost-effective option for your business. Strategically, our data scientists and architects map connections between data and workloads. We then correlate how workloads interact with each resource with the optimal financial mechanism for reaching your cloud optimization goals.
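As a simplified, hypothetical illustration of the arithmetic behind a Savings Plan or RI purchase decision, the sketch below compares one year of on-demand spend against a one-year reservation at a given expected utilization. The hourly rates and discount are made-up numbers, not published pricing.

    HOURS_PER_YEAR = 8760

    on_demand_rate = 0.20        # $/hour, hypothetical
    ri_effective_rate = 0.13     # $/hour equivalent for a 1-year commitment, hypothetical
    expected_utilization = 0.70  # fraction of the year the instance actually runs

    # On-demand: you pay only for the hours you actually run.
    on_demand_cost = on_demand_rate * HOURS_PER_YEAR * expected_utilization

    # Reserved: you pay for the full year regardless of usage.
    reserved_cost = ri_effective_rate * HOURS_PER_YEAR

    # Break-even utilization: the usage level above which the reservation pays off.
    break_even = ri_effective_rate / on_demand_rate

    print(f"On-demand cost: ${on_demand_cost:,.0f}")
    print(f"Reserved cost:  ${reserved_cost:,.0f}")
    print(f"Break-even utilization: {break_even:.0%}")
    print("Buy the reservation" if on_demand_cost > reserved_cost else "Stay on-demand")

With these example numbers the break-even point is 65% utilization; at the assumed 70% the reservation wins, while a workload running only half the time would be cheaper on demand. This is exactly why the same purchase can be a great deal for one workload and wasted commitment for another.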

Cloud Optimization with 2nd Watch

Working with a managed cloud service provider like 2nd Watch will give your organization the expertise needed for cloud optimization. If you want to learn more about cost savings or are interested in fully optimizing your cloud infrastructure, contact us to take your next steps.

 


Datacenter Migration to the Cloud: Why Your Business Should Do it and How to Plan for it

Datacenter migration is ideal for businesses that are looking to exit or reduce on-premises datacenters, migrate workloads as-is, modernize apps, or leave another cloud. Executing migrations, however, is no small task, and as a result, many enterprise workloads still run in on-premises datacenters. Often technology leaders want to migrate more of their workloads and infrastructure to a private or public cloud, but they are turned off by the seemingly complex processes and strategies involved in cloud migration or lack the internal cloud skills necessary to make the transition.

 

Though datacenter migration can be a daunting business initiative, the benefits of moving to the cloud are well worth the effort, and the challenges of the migration process can be mitigated by creating a strategy, using the correct tools, and utilizing professional services. Datacenter migration provides a great opportunity to revise, rethink, and improve an organization’s IT architecture. It also ultimately impacts business-critical drivers such as reducing capital expenditure, decreasing ongoing cost, improving scalability and elasticity, improving time-to-market, enacting digital transformation, and attaining improvements in security and compliance.

What are Common Datacenter Migration Challenges?

To ensure a seamless and successful migration to the cloud, businesses should be aware of the potential complexities and risks associated with a datacenter migration. These complexities and risks are addressable, and if addressed properly, organizations can not only create an optimal environment for their migration project but also provide a launch point for business transformation.

Not Understanding Workloads

While cloud platforms are touted as flexible, they are service-oriented resources and should be treated as such. To be successful in cloud deployment, organizations need to understand each workload’s compatibility, performance requirements (including hardware, software, and IOPS), required software, and adaptability to change. Teams need to run their cloud workloads on the cloud service that is best aligned with the needs of the application and the business.

Not Understanding Licensing

Cloud marketplaces allow businesses to easily “rent” software at an hourly rate. Though the ease of this purchase is enticing, it’s important to remember that it’s not the only option out there. Not all large vendors offer licensing mobility for applications outside the operating system, and companies should continue to leverage existing relationships with licensing brokers. Just because a business is migrating to the cloud doesn’t mean it should abandon existing licensing channels. Organizations should familiarize themselves with their licensing choices to better maximize ROI.

Not Looking for Opportunities to Incorporate PaaS

Platform as a Service (PaaS) is a cloud computing model in which a cloud service provider delivers hardware and software tools to users over the internet, versus a build-it-yourself Infrastructure as a Service (IaaS) model. The PaaS provider abstracts everything (servers, networks, storage, operating system software, databases, development tools), enabling teams to focus on their application. This lets PaaS customers build, test, deploy, run, update, and scale applications more quickly and inexpensively than they could if they had to build out and manage the underlying IaaS environment themselves. While businesses shouldn’t feel compelled to rewrite all their network configurations and operating environments, they should look for quick PaaS wins that replace aging systems.

Not Proactively Preparing for Cloud Migration

Building a new datacenter is a major IT event and usually goes hand-in-hand with another significant business event, such as an acquisition or outgrowing the existing datacenter. In the case of moving to a new on-premises datacenter, the business will slow down as the company takes on a physical move. Migrating to the cloud is usually not coupled with an eventful business change, and as a result, business does not stop when a company chooses to migrate. Therefore, a critical part of cloud migration success is designing the whole process so it can run alongside the other IT changes happening on the same timeline. Application teams frequently adopt cloud deployment practices months before their systems actually migrate to the cloud. By doing so, the team is ready before the infrastructure is even prepared, which makes cloud migration a much smoother event. Combining cloud events with other changes in this manner will maximize a company’s ability to succeed.

Treating and Running the Cloud Environment Like Traditional Datacenters

It seems obvious that cloud environments should be treated differently from traditional datacenters, but this is a common pitfall for organizations to fall into. For example, preparing to migrate to the cloud should not include traditional datacenter services, like air conditioning, power supply, physical security, and other datacenter infrastructure, as part of the planning. Again, this may seem very obvious, but if a business is used to certain practices, it can be surprisingly difficult to break entrenched mindsets and processes.

How to Plan for a Datacenter Migration

While there are potential challenges associated with datacenter migration, the benefits of moving from physical infrastructure, enterprise datacenters, and/or on-premises data storage systems to a cloud datacenter or a hybrid cloud system are well worth the effort.

Now that we’ve gone over the potential challenges of datacenter migration, how do businesses enable a successful datacenter migration while effectively managing risk?

Below, we’ve laid out a repeatable, high-level migration strategy broken down into four phases: Discovery, Planning, Execution, and Optimization. By leveraging a repeatable framework like this, organizations create the opportunity to identify assets, minimize migration costs and risks through a multi-phased migration approach, enable deployment and configuration, and finally, optimize the end state.

Phase 1: Discovery

During the Discovery phase, companies should understand and document the entire datacenter footprint. This means understanding the existing hardware mapping, software applications, storage layers (databases, file shares), operating systems, networking configurations, security requirements, models of operation (release cadence, how to deploy, escalation management, system maintenance, patching, virtualization, etc.), licensing and compliance requirements, as well as other relevant assets.

The objective of this phase is to have a detailed view of all relevant assets and resources of the current datacenter footprint.

The key milestones in the Discovery phase are:

  • Creating a shared datacenter inventory footprint: Every team and individual who is a part of the datacenter migration to the cloud should be aware of the assets and resources that will go live.
  • Sketching out an initial cloud platform foundations design: This involves identifying centralized concepts of the cloud platform organization, such as folder structure, the Identity and Access Management (IAM) model, the network administration model, and more.

As a best practice, companies should engage in cross-functional dialogue within their organizations, including teams from IT to Finance to Program Management, ensuring everyone is aligned on the changes needed to support future cloud processes. Furthermore, as a business migrates from a physical datacenter to the cloud, it should consider whether its datacenter team is trained to support the systems and infrastructure of the cloud provider.

Phase 2: Planning

When a company enters the Planning phase, it leverages the assets and deliverables gathered in the Discovery phase to create migration waves to be sequentially deployed into non-production and production environments.

Typically, it is best to target non-production migration waves first, which helps establish the sequence for the waves that follow. To start, consider the following:

  • Mapping the current server inventory to the cloud platform’s machine types: Each current workload will generally run on a virtual machine type with similar computing power, memory, and disk. Oftentimes, though, the current workload is overprovisioned, so each one should be evaluated to ensure it is migrated onto the right VM size.
  • Timelines: Businesses should lay out their target dates for each migration project.
  • Workloads in each grouping: Figure out how migration waves are grouped, e.g., non-production vs. production applications.
  • The cadence of code releases: Factor in any upcoming code releases as this may impact the decision of whether to migrate sooner or later.
  • Time for infrastructure deployment and testing: Allocate adequate time for testing the infrastructure before fully moving over to the cloud.
  • The number of application dependencies: Migration order should be influenced by the number of application dependencies. The applications with the fewest dependencies are generally good candidates for migration first. In contrast, wait to migrate an application that depends on multiple databases.
  • Migration complexity and risk: Migration order should also take complexity into consideration. Tackling simpler aspects of the migration first will generally yield a more successful migration.

As mentioned above, the best practice for migration waves is to start with the more predictable and simple workloads. For instance, companies should migrate file shares first, then databases and domain controllers, and save the applications for last. However, sometimes the complexity and dependencies don’t allow for a straightforward migration. In these cases, it is prudent to engage a service provider with experience in these complex environments.
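As an illustrative sketch of how dependency counts and complexity can drive ordering, the Python snippet below sorts a hypothetical workload inventory so that simple, low-dependency, non-production workloads come first. The workload names, fields, and scoring are assumptions, not a prescribed methodology.

    # Hypothetical inventory assembled during the Discovery phase.
    workloads = [
        {"name": "file-share",   "env": "prod",     "dependencies": 0, "complexity": 1},
        {"name": "hr-database",  "env": "prod",     "dependencies": 2, "complexity": 3},
        {"name": "crm-app",      "env": "prod",     "dependencies": 5, "complexity": 4},
        {"name": "crm-app-test", "env": "non-prod", "dependencies": 5, "complexity": 4},
    ]

    def migration_order_key(w):
        # Non-production first, then fewest dependencies, then lowest complexity.
        return (w["env"] != "non-prod", w["dependencies"], w["complexity"])

    for position, w in enumerate(sorted(workloads, key=migration_order_key), start=1):
        print(f'{position}. {w["name"]} ({w["env"]}, {w["dependencies"]} dependencies)')

In practice you would group this ordered list into waves and adjust for timelines, release cadences, and shared dependencies, but a simple scored sort like this is a reasonable starting point for the conversation.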

Phase 3: Execution

Once companies have developed a plan, they can bring it to fruition in the Execution phase. Here, businesses will need to be deliberate about the steps they take and the configurations they develop.

In the Execution phase, companies will put infrastructure components in place and ensure they are configured appropriately, including IAM, networking, firewall rules, and service accounts. This is also where teams should test their applications on the new infrastructure configurations to ensure they have access to their databases, file shares, web servers, load balancers, Active Directory servers, and more. Execution also includes using logging and monitoring to ensure applications continue to function with the necessary performance.
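One small piece of that testing can be automated. The illustrative Python sketch below runs basic TCP reachability checks against a list of dependency endpoints after cutover; the hostnames and ports are placeholders, and real validation would go well beyond socket checks.

    import socket

    # Placeholder endpoints an application depends on after migration.
    endpoints = [
        ("db.internal.example.com", 5432),      # database
        ("files.internal.example.com", 445),    # file share
        ("app-lb.internal.example.com", 443),   # load balancer / web tier
    ]

    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"OK          {host}:{port}")
        except OSError as exc:
            print(f"UNREACHABLE {host}:{port} ({exc})")

Checks like this catch missing firewall rules and routing gaps early, before functional and performance testing begins.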

In order for the Execution phase to be successful, there needs to be agile application debugging and testing. Moreover, organizations should have both a short and long-term plan for resolving blockers that may come up during the migration. The Execution phase is iterative and the goal should be to ensure that applications are fully tested on the new infrastructure.

Phase 4: Optimization

The last phase of a datacenter migration project is Optimization. After a business has migrated its workloads to the cloud, it should conduct periodic reviews and planning to optimize the workloads. Optimization includes the following activities:

  • Resizing machine types and disks
  • Leveraging software like Terraform for more agile and predictable deployments
  • Improving automation to reduce operational overhead
  • Bolstering integration with logging, monitoring, and alerting tools
  • Adopting managed services to reduce operational overhead

Cloud services provide visibility into resource consumption and spending, so organizations can more easily identify the compute resources they are paying for. Additionally, businesses can pinpoint which virtual machines they need and which they don’t. By migrating from a traditional datacenter environment to a cloud environment, teams will be able to optimize their workloads using the powerful tools that cloud platforms provide.

How do I take the first step in datacenter migration?

While undertaking a full datacenter migration is a significant project, it is worthwhile. The migration framework we’ve provided can help any business break down the process into manageable stages and move fully to the cloud.

When you’re ready to take the first step, we’re here to help to make the process even easier. Contact a 2nd Watch advisor today to get started with your migration to the cloud.

 


Cloud Crunch Podcast: 5 Strategies to Maximize Your Cloud’s Value – Create Competitive Advantage from your Data

AWS Data Expert, Saunak Chandra, joins today’s episode to break down the first of five strategies used to maximize your cloud’s value – creating competitive advantage from your data. We look at tactics including Amazon Redshift, RA3 node type, best practices for performance, data warehouses, and varying data structures. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.


Cloud Crunch Podcast: You’re on the Cloud. Now What? 5 Strategies to Maximize Your Cloud’s Value

You migrated your applications to the cloud for a reason. Now that you’re there, what’s next? How do you take advantage of your applications and data that reside in the cloud? What should you be thinking about in terms of security and compliance? In this first episode of a 5-part series, we discuss 5 strategies you should consider to maximize the value of being on the cloud. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.


Cloud Crunch Podcast: Moving to the Cloud for the Right Reasons

When you’re considering moving to the cloud, it’s important to take a personal examination of your goals for migrating, outside of the basic benefits achievable with the cloud. To maximize the value of the cloud, you have to make sure you’re moving for the right reasons. Today we discuss just that with our very own 2nd Watch CEO, Doug Schneider. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.


Cloud Governance: Why It Is Critical to the Success of Cloud Adoption

According to a 2019 report by Unisys, 37% of all cloud adoption initiatives fail to realize their objectives.

The report, although disturbing, is not shocking by any measure. Although businesses continue to migrate to the cloud, many have failed to make it a core part of their business strategy. The reasons for this vary – poorly trained staff, the inability to utilize cloud resources effectively, or the absence of a strategy that leverages the power of the cloud.

For these reasons and many others, businesses incur unexpected costs, unproductive workflows, and cybersecurity risks to their data on the cloud. These organizations need a set of protocols for utilizing cloud resources efficiently, effectively, and securely. In short, they need a cloud governance framework that enables them to extract the benefits of the cloud.

Organizations can fully realize these benefits only when their cloud policies are designed to leverage them. Therefore, a well-designed cloud governance framework is critical to the success of cloud adoption. What is cloud governance and how does it lay the foundation for the success of your cloud adoption?

Download our white paper to learn about the role of cloud governance in successful cloud adoption.

-Mir Ali, Field CTO


Cloud Crunch Podcast: Multi-Cloud – Is it Really Worth It?

The promise of multi-cloud suggests enterprises should be able to run their applications and workloads in whichever cloud environment makes the most sense from a cost, performance, or functionality perspective. But the reality can be very different in practice, as enterprises grapple with how best to make technologies created by competing suppliers play nicely together. Joseph McKendrick, a contributor and analyst for Forbes, CBS Interactive, Information Today, Inc., and RTInsights, joins today’s episode to give his perspective on the value of a multi-cloud strategy. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.


Cloud Center of Excellence: 3 Foundational Areas with 4 Phases of Maturity

A cloud center of excellence (CCoE) is essential for successful, efficient, and effective cloud implementation across your organization. Although the strategies look different for each business, there are three areas of focus, and four phases of maturity within those areas, that are important markers for any CCoE.

1. Financial Management

As you move to the public cloud and begin accessing the innovation and agility it offers, you also take on the potential for budget overruns. Without proper planning and the inclusion of financial leaders, you may find you’re not only still paying for datacenters but also racking up large, and growing, public cloud bills. Financial management needs to be centrally governed, but also extremely deliberate, because it touches hundreds of thousands of places across your organization.

You may think involving finance will be painful, but bringing all stakeholders to the table as equals has proven highly effective. Over the last five years, there’s been a revolution in how finance can effectively engage in cloud and infrastructure management. This emerging model, guided by the CCoE, enables organizations to justify leveraging the cloud not only based on agility and innovation, but also on cost. Increasingly, organizations are achieving both better economics and the ability to do things in the cloud that cannot be done inside datacenters.

2. Operations

To harness the power and scale possible in the cloud, you need to put standards and best practices in place. These often start around configuration – tagging policies, reference architectures, workloads, virtual machines, storage, and performance characteristics. Standardization is a prerequisite to repeatability and is the driving force behind gaining the best ROI from the cloud.

Today, we’re actually seeing that traditional application of the cloud does not yield the best economic benefits available. For decades, we accepted an architectural model where the operating system was central to the way we built, deployed, and managed applications. However, when you look beyond the operating system, whether it’s containers or the rich array of platform services available, you start to see new opportunities that aren’t available inside datacenters.

When you’re not carrying the capital expenditure for infrastructure just to have it available, and instead consume it only when you need it, you can really start to unlock the power of the cloud. There are also many more workloads that can take advantage of this model. The more you build cloud-native, or cloud-centric, architecture, the more potential you have to maximize the financial benefits.

3. Cloud Security and Compliance

Cloud speed is fast, much faster than what’s possible in datacenters. Avoid a potentially fatal breach, data disruption, or noncompliance penalty with strict security and compliance practices. You should be confident in the tools you implement throughout your organization, especially where the cloud is being managed day to day and changes are being driven. With each change and new instance, make sure you’re following the CCoE recommendations with respect to industry, state, and federal compliance regulations.

4-Phase Cloud Maturity Model

CloudHealth put forward a cloud maturity model based on patterns observed in over 10,000 customer interactions in the cloud. Like a traditional maturity model, the bottom left represents immaturity in the cloud, and the upper right signifies high maturity. Within each of the three foundational areas – financial management, operations, and security and compliance – an organization needs to scale and mature through the following four phases.

Phase 1: Visibility

Maturity starts at the most basic level by gaining visibility into your current architecture. Visibility gives you the connective tissue necessary to make smart decisions – although it doesn’t actually make those decisions obvious to you. First, know what you’re running, why you’re running it, and the cost. Then, analyze how it aligns with your organization from a business perspective.
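For example, a first pass at cost visibility is often as simple as grouping a billing export by the tags you already apply. The sketch below is illustrative only; the file name, column names, and tag values are assumptions about how such an export might look.

    import csv
    from collections import defaultdict

    # Hypothetical billing export with one row per resource per billing period.
    costs_by_team = defaultdict(float)
    with open("billing_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            team = row.get("tag_team") or "untagged"
            costs_by_team[team] += float(row["cost_usd"])

    # Surface the biggest spenders (and any untagged spend) first.
    for team, cost in sorted(costs_by_team.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{team:>12}: ${cost:,.2f}")

Even a rough breakdown like this shows where spend is concentrated and how much of it cannot yet be attributed to a team, which is usually the first gap to close.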

Phase 2: Optimization

The goal here is optimization within each of the three areas. With regard to financial management and operations, you need to size a workload appropriately to support demand without going over capacity. In the case of security, optimization means proactively monitoring all of the hundreds of thousands of changes that occur across the organization each day. The strategy and tools you use to optimize must be in accordance with the best practices in your standards and policies.

Phase 3: Governance and Automation

In this phase you’re moving away from just pushing out dashboards, notification alerts, or reports to stakeholders. Now, it’s about strategically monitoring for the ideal state of workloads and applications in your business services. How do you automate the outcomes you want? The goal is to keep it in the optimum state all the time, or nearly all the time, without manual tasks and the risks of human error.

Phase 4: Business Integration

This is the ultimate state where the cloud gets integrated with your enterprise dashboards and service catalogue, and everything is connected across the organization. You’re no longer focused on the destination of the cloud. Instead, the cloud is just part of how you transact business.

As you move through each phase, establish measurements of cloud maturity using KPIs and simple metrics. Enlist the help of a partner like 2nd Watch that can provide expertise, automation, and software so you can achieve better business outcomes regardless of your cloud goals. Contact Us to understand how our cloud optimization services are maximizing returns.

-Chris Garvey, EVP of Product