
Cloud Crunch Podcast: Managed DevOps – An Oxymoron?

Managed DevOps sounds like an oxymoron, so why would someone consider it? Hear from our DevOps pro about the factors that contribute to companies stalling mid-DevOps Transformation and how a Managed DevOps strategy can help overcome that plateau. We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.


3 Reasons to Consider a Hybrid Cloud Strategy

If there’s one thing IT professionals can agree on, it’s that hybrid cloud computing isn’t going away. Developed in response to our growing dependence on data, the hybrid cloud is being embraced by enterprises and providers alike.

What is hybrid cloud computing?

Hybrid cloud computing can be a combination of private cloud, like VMware, and public cloud; or it can be a combination of cloud providers, like AWS, Azure and Google Cloud. Hybrid cloud architecture might include a managed datacenter or a company’s own datacenter. It could also include both on-prem equipment and cloud applications.

Hybrid cloud computing gained popularity alongside the digital transformation we’ve witnessed taking place for years. As applications evolve and become more dev-centric, they can be stored in the cloud. At the same time, there are still legacy apps that can’t be lifted and shifted into the cloud and, therefore, have to remain in a datacenter.

Ten years ago, hybrid and private clouds were used to manage growth, but now we're seeing widespread adoption by service providers to meet client needs. These strategies range from on-prem up to the cloud (VMware Cloud (VMC) on AWS), to cloud-down (AWS Outposts), to robust deployment and management frameworks for any endpoint (GCP Anthos).

With that said, for many organizations data may never entirely move to the cloud. A company's data is its 'secret sauce,' and despite the safety of the cloud, not everything lends itself to cloud storage. Depending on what exactly the data is – mainframes, proprietary information, formulas – some businesses don't feel comfortable with service providers even having access to such business-critical information.

1. Storage

One major reason companies move to the cloud is the large amount of data they are now storing. Some companies might not be able to, or might not want to, build and expand their datacenter as quickly as the business and its data require.

With the virtually unlimited storage the cloud provides, it is an easy solution. Rather than having to forecast data growth, prioritize storage, and risk additional costs, a hybrid strategy allows for expansion.

2. Security

The cloud is, in most cases, far more secure than on-prem. However, especially when the cloud first became available, a lot of companies were concerned about who could see their data, the potential for leaks, and how to guarantee lockdown. Today, security tools have vastly improved, visibility is much better, and cloud providers must meet compliance requirements from a growing number of local and federal authorities. Additionally, third-party auditors verify cloud provider practices, and internal oversight helps avoid a potentially fatal data breach. Today, organizations large and small, across industries, and even secret government agencies trust the cloud for secure data storage.

It’s also important to note that the public cloud can be more secure than your own datacenter. For example, if you try to isolate data in your own datacenter or on your own infrastructure, you might find a rogue operator creating shadow IT where you don’t have visibility. With hybrid cloud, you can take advantage of tools like AWS Control Tower, Azure Sentinel, AWS Landing Zone blueprints, and other CSP security tools to ensure control of the system. Similarly, with tooling from VMware and GCP Anthos you can create a single policy and configuration for environment standardization and security across multiple clouds and on-prem in a single management plane.

3. Cost

Hybrid cloud computing is a great option when it comes to cost. On an application level, the cloud lets you scale up or down, and that versatility and flexibility can save costs. But if you’re running always-on, steady-state applications in a large environment, keeping them in a datacenter can be more cost effective. One can make a strong case for a mixture of applications being placed in the public cloud while internal IP apps remain in the datacenter.

You also need to consider the cost of your on-prem environment. There are some cases, depending on the type and format of storage necessary, where the raw cost of the cloud doesn’t deliver a return on investment (ROI). If your datacenter equipment is running near 80% or above utilization, it might be more cost effective to keep running the workload there. Alternatively, you should also consider burst capacity as well as your non-consistent workloads. If you don’t need something running 24/7, the cloud lets you turn it off at night to deliver savings.

Bonus Reason – Consistency of management tooling and staff skills

Offerings like VMC on AWS, AWS Outposts, and GCP Anthos let you run the same management tooling on-prem and in the cloud, so your team can apply the skills it already has across every environment instead of retraining for each one.

The smartest way to move forward with your cloud architecture – hybrid or otherwise – is to consult with cloud computing experts. 2nd Watch helps you choose the most efficient strategy for your business, aids in planning and completing migration in an optimized fashion, and secures your data with comprehensive cloud management. Contact Us to take the next step in your cloud journey.

-Dusty Simoni, Sr Product Manager, Hybrid Cloud


3 Productivity-Killing Data Problems and How to Solve Them

With the typical enterprise using over 1,000 Software as a Service applications (source: Kleiner Perkins), each with its own private database, it’s no wonder people complain their data is siloed. Picture a thousand little silos, all locked up!

[Chart: Number of cloud applications used per enterprise, by industry vertical]

Then, imagine you start building a dashboard out of all those data silos. You’re squinting at it and wondering, can I trust this dashboard? You placate yourself because at least you have data to look at, but the dashboard only raises more questions for which the data doesn’t yet exist.

If you’re in a competitive industry, and we all are, you need to take your data analysis to the next level. You’re either gaining an advantage over your competition or being left behind.

As a business leader, you need data to support your decisions. These three data complexities are at the core of every leader’s difficulties with gaining business advantages from data:

  1. Siloed data
  2. Untrustworthy data
  3. No data

 

  1. Siloed data

Do you have trouble seeing your data at all? Are you mentally scanning your systems and realizing just how many different databases you have? A recent customer of ours was collecting reams of data from their industrial operations but couldn’t derive value from it because it sat siloed in a datacenter database. The data couldn’t reach any dashboard in any meaningful way. This is a common problem. With enterprise data doubling every few years, it takes modern tools and strategies to keep up with it.

For our customer, we started with defining the business purpose of their industrial data – to predict demand in the coming months so they didn’t have a shortfall. That business purpose, which had team buy-in at multiple corporate levels, drove the entire engagement. It allowed us to keep the technology simple and focused on the outcome.

One month into the engagement, they had clean, trustworthy, valuable data in a dashboard. Their data was unlocked from the database and published.

Siloed data takes some elbow grease to access, but it becomes a lot easier if you have a goal in mind for the data. Knowing where you are going cuts through the noise and makes decisions easier.

  2. Untrustworthy data

Do you have trouble trusting your data? You have a dashboard, yet you’re pretty sure the data is wrong, or a lot of it is missing. You can’t take action on it, because you hesitate to trust it. Data trustworthiness is a prerequisite for making your data action-oriented. But most data has problems – missing values, invalid dates, duplicate values, and meaningless entries. If you don’t trust the numbers, you’re better off without the data.

Data is there for you to take action on, so you should be able to trust it. One key strategy is not to bog down your team with maintaining systems, but rather to use simple, maintainable, cloud-based systems built on modern tools to make your dashboard real.

  3. No data

Often you don’t even have the data you need to make a decision. “No data” comes in many forms:

  • You don’t track it. For example, you’re an ecommerce company that wants to understand how email campaigns can help your sales, but you don’t have a customer email list.
  • You track it but you can’t access it. For example, you start collecting emails from customers, but your email SaaS system doesn’t let you export your emails. Your data is so “siloed” that it effectively doesn’t exist for analysis.
  • You track it but need to do some calculations before you can use it. For example, you have a full customer email list, a list of product purchases, and you just need to join the two together. This is a great place to be and is where we see the vast majority of customers.

Getting there means finding patterns and insights not just within datasets, but across them, as in the join sketched below. This is only possible with a modern, cloud-native data lake.
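To make that last bullet concrete, here is a minimal sketch of joining a customer email list to purchase history. It assumes Python with pandas, and the file and column names are hypothetical placeholders; the point is simply that once both exports exist, the cross-dataset analysis is a single join.

```python
# A minimal sketch of joining two exported datasets; file/column names are hypothetical.
import pandas as pd

# Customer email list exported from your email or CRM system
customers = pd.read_csv("customers.csv")   # columns: customer_id, email
# Product purchase history exported from your order system
purchases = pd.read_csv("purchases.csv")   # columns: customer_id, product, amount

# Join the two datasets on the shared customer_id key
purchases_with_email = purchases.merge(customers, on="customer_id", how="left")

# Example cross-dataset analysis: revenue per email address
revenue_by_email = (
    purchases_with_email.groupby("email")["amount"]
    .sum()
    .sort_values(ascending=False)
)
print(revenue_by_email.head())
```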

The solution: define your business need and build a data lake

Step one for any data project – today, tomorrow and forever – is to define your business need.

Do you need to understand your customer better? Whether it is click behavior, email campaign engagement, order history, or customer service, your customer generates more data today than ever before that can give you clues as to what she cares about.

Do you need to understand your costs better? Most enterprises have hundreds of SaaS applications generating data from internal operations. Whether it is manufacturing, purchasing, supply chain, finance, engineering, or customer service, your organization is generating data at a rapid pace.

(Source: AWS, “What is a Data Lake?”)

Don’t be overwhelmed. You can cut through the noise by defining your business case.

The second step in your data project is to take that business case and make it real in a cloud-native data lake. Yes, a data lake. I know the term has been abused over the years, but a data lake is very simple; it’s a way to centrally store all (all!) of your organization’s data, cheaply, in open source formats to make it easy to access from any direction.

Data lakes used to be expensive, difficult to manage, and bulky. Now, all major cloud providers (AWS, Azure, GCP) have established best practices to keep storage dirt-cheap and data accessible and very flexible to work with. But data lakes are still hard to implement and require specialized, focused knowledge of data architecture.

How does a data lake solve these three problems?

  1. Data lakes de-silo your data. Since the data stored in your data lake is all in the same spot, in open-source formats like JSON and CSV, there aren’t any technological walls to overcome. You can query everything in your data lake from a single SQL client. If you can’t, then that data is not in your data lake and you should bring it in.
  2. Data lakes give you visibility into data quality. Modern data lakes and expert consultants build in a variety of checks for data validation, completeness, lineage, and schema drift. These are all important concepts that together tell you if your data is valuable or garbage. These sorts of patterns work together nicely in a modern, cloud-native data lake.
  3. Data lakes welcome data from anywhere and allow for flexible analysis across your entire data catalog. If you can format your data into CSV, JSON, or XML, then you can put it in your data lake. This solves the problem of “no data.” It is very easy to create the relevant data, either by finding it in your organization or by engineering it through analysis across your datasets. An example would be joining data from Sales (your CRM) and Customer Service (Zendesk) to find out which product category has the best or worst customer satisfaction scores – sketched below.
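As an illustration of the “single SQL client” point, here is a hedged sketch of running that CRM-plus-Zendesk query with Amazon Athena through boto3. The database, table, and column names are hypothetical placeholders; only the Athena API calls (start_query_execution, get_query_execution, get_query_results) are standard.

```python
# Hedged sketch: run a cross-dataset SQL query over the data lake with Amazon Athena.
import time
import boto3

athena = boto3.client("athena")

QUERY = """
SELECT p.category,
       AVG(z.satisfaction_score) AS avg_csat
FROM   crm_sales.purchases p
JOIN   zendesk.tickets z ON z.customer_id = p.customer_id
GROUP  BY p.category
ORDER  BY avg_csat DESC
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "analytics_zone"},          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/queries/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```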

The 2nd Watch DataOps Foundation Platform

You should only build a data lake if you have clear business outcomes in mind. Most cloud consulting partners will robotically build a bulky data lake without any thought to the business outcome. What sets 2nd Watch apart is our focus on your business needs. Do you need to make better decisions? Speed up a process? Reduce costs somewhere? We keep your goal front and center throughout the entire engagement. We’ve deployed data lakes dozens of times for enterprises with this unique focus in mind.

Our ready-to-deploy data lake captures years of cloud experience and best practices, with integration from governance to data exploration and storage. We explain the reasons behind our decisions and make changes based on your requirements, while ingesting data from multiple sources and exploring it as soon as possible. The core of the data lake is three zones, each backed by its own S3 bucket.

Here is a tour of each zone:

  • Drop Zone: As the “single source of truth,” this is a copy of your data in its most raw format, always available to verify what the actual truth is. Place data here with minimal or no formatting. For example, you can take a daily “dump” of a relational database in CSV format.
  • Analytics Zone: Data in the Analytics Zone is compressed and reformatted to support fast, general-purpose analytics. From here, you can use a single SQL client, like Amazon Athena, to run SQL queries over your entire enterprise dataset – all from a single place. This is the core value add of your data lake.
  • Curated Zone: The “golden” or final, polished, most-valued datasets for your company go here. This is where you save and refresh data that will be used for dashboards or turned into visualizations.

Our classic three-zone data lake on S3 features immutable data by default. You’ll never lose data, nor do you have to configure a lot of settings to accomplish this. Using AWS Glue, data is automatically compressed and archived to minimize storage costs. Convenient search with an always-up-to-date data catalog lets you easily discover all of your enterprise datasets.
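As a sketch of the Drop Zone to Analytics Zone step, the snippet below reads a raw CSV dump and rewrites it as compressed Parquet registered in the Glue Data Catalog. It assumes the open-source awswrangler library, and the bucket, database, and table names are hypothetical; in our platform this step is automated with AWS Glue, as described above.

```python
# Hedged sketch: convert a raw Drop Zone CSV into compressed Parquet in the Analytics Zone.
import awswrangler as wr

# Read the raw daily CSV dump that landed in the Drop Zone (hypothetical bucket/path)
df = wr.s3.read_csv("s3://my-drop-zone/orders/2020-07-01/orders.csv")

# Write it to the Analytics Zone as snappy-compressed Parquet and register it in the
# Glue Data Catalog so it is immediately queryable from Athena
wr.s3.to_parquet(
    df=df,
    path="s3://my-analytics-zone/orders/",
    dataset=True,
    compression="snappy",
    database="analytics_zone",   # hypothetical catalog database
    table="orders",              # hypothetical table name
)
```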

In the Curated Zone, only the most important “data marts” – approved datasets – get loaded into the more costly Redshift or RDS, minimizing costs and complexity (see the load sketch below). And with Amazon SageMaker tapping into your Analytics and Curated Zones, you are prepared for effective machine learning. One of the most overlooked aspects of machine learning and advanced analytics is the importance of clean, available data. Our data lake solves that issue.
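As a hedged sketch of that Curated Zone load, the snippet below submits a COPY of an approved data mart from S3 into Redshift using the Redshift Data API. The cluster, database, IAM role, and table names are hypothetical placeholders.

```python
# Hedged sketch: load an approved data mart from the Curated Zone into Redshift.
import boto3

redshift_data = boto3.client("redshift-data")

COPY_SQL = """
COPY marts.customer_satisfaction
FROM 's3://my-curated-zone/customer_satisfaction/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-s3-read'
FORMAT AS PARQUET;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster
    Database="warehouse",                    # hypothetical database
    DbUser="etl_user",                       # hypothetical user
    Sql=COPY_SQL,
)
print("Submitted COPY statement:", response["Id"])
```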

If you’re struggling with one of these three core data issues, the solution is to start with a crisp definition of your business need, and then build a data lake to execute on that need. A data lake is just a central repository for flexible and cheap data storage. If you focus on keeping your data lake simple and geared towards the analysis you need for your business, these three core data problems will be a thing of the past.

If you want more information on creating a data lake for your business, download our DataOps Foundation datasheet to learn about our 4-8 week engagement that helps you build a flexible, scalable data lake for centralizing, exploring and reporting on your data.

-Rob Whelan, Practice Manager, Data Engineering & Analytics

 

 


Cloud Crunch Podcast: 5 Strategic IT Business Drivers CXOs are Contemplating Now

What is the new normal for life and business after COVID-19, and how does that impact IT? We dive into the 5 strategic IT business drivers CXOs are contemplating now and the motivation behind those drivers. Read the corresponding blog article at https://www.2ndwatch.com/blog/five-strategic-business-drivers-cxos-contemplating-now/. We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.


Cloud for Advanced Users – The 5 Most Important Lessons Learned Over a Decade

Being involved in cloud services and working closely with cloud providers over the past 10 years has given us a great deal of insight into the triumphs and pitfalls of cloud consumers. We’ve distilled that vast experience and come up with our list of the 5 most important lessons we’ve learned over the past decade for users that are experienced in the cloud with multiple applications/workloads running.

  1. Governance – Tagging, Tools, and Automation

Many of our customers have hundreds, if not thousands, of accounts, and we’ve helped them solve many of their governance challenges. One challenge is ensuring they’re not doing certain things – for example, allowing shadow IT or functioning in silos. In the cloud, you want everyone to have visibility into best practices and to understand the critical role cloud plays in creating business value.

There are numerous tools and automation methods you can leverage to ensure your governance is in step with the latest innovation. First and foremost, a strong tagging strategy is critical. As with shadow IT, if you don’t tag things correctly, your teams can spin up resources with limited visibility into who owns them, leaving them continuously running and accumulating expenses over time. If you don’t start with a tagging strategy from day one, retroactively correcting it is a herculean task. Starting with a strong architectural foundation and making sure that foundation stays in place with the proper tools will ensure governance doesn’t become a burden.

Putting the proper guardrails in place for this, such as AWS Config, can help overcome this challenge and make sure everybody’s following the rules. Sometimes governance and moving fast can seem like adversaries, but automation can help satisfy both.
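As a small example of what a tagging guardrail can look like in practice, here is a minimal audit sketch that lists EC2 instances missing required tags. It assumes Python with boto3, and the required tag keys are hypothetical; a managed AWS Config rule such as required-tags is the more robust, fully automated option.

```python
# Minimal tag-audit sketch; the required tag keys are hypothetical placeholders.
import boto3

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                # Flag instances that violate the tagging standard
                print(f"{instance['InstanceId']} is missing tags: {sorted(missing)}")
```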

  2. Optimization – It’s not a one-time exercise

Cloud users tend to think of optimization in terms of Reserved Instances (RIs), but it reaches far beyond just RIs. Well-defined policies must exist to control spend, along with the discipline to follow them.

There are many ways to leverage cloud native solutions and products to achieve optimization as well as new classes of service. One key point is leveraging the right resources where appropriate. As new services come out and skills increase within organizations, the opportunity to not only optimize spend but optimize the applications themselves by leveraging more cloud native services will continue to drive down operating cost.

Optimization is not a one-time exercise, either. It’s an ongoing practice that needs to be done on a regular basis. Like cleaning out the garage, you need to maintain it. Who’s responsible for this? Often, it’s your company’s Cloud Center of Excellence, or a partner like 2nd Watch.

  3. Cloud Center of Excellence – Be bold and challenge the norm

We encourage all organizations to form a Cloud Center of Excellence (CCoE). Typically led by an executive, your CCoE should be a multi-stakeholder organization that includes representatives from all areas of the business. With this multi-skilled group, you benefit from subject matter experts across a wide variety of areas within your organization who collectively become subject matter experts in cloud services and solutions. When you break down silos, you’re able to move rapidly.

Your CCoE should be formed at the beginning of your migration and continue to revisit new capabilities released in the cloud on an ongoing basis, updating the organization’s standards to ensure enforcement.

One of the CCoE’s biggest roles is evangelizing within the organization to ensure people are embracing the cloud and celebrating successes, whether they come from implementing DevOps with cloud native tools or from optimization and cloud refactoring. The CCoE’s motto should be, ‘Be bold, challenge the norm, look for new ways of doing things, and celebrate BIG.’

  4. Multi-Cloud – Get out of your comfort zone

As an advanced user, you have grown up with AWS and have a solid understanding and background in AWS. You’ve learned all the acronyms for AWS and understand the products and services. But now you’re being asked to integrate another CSP you might not be as familiar with. How do you take that basic cloud knowledge and transition to Azure or GCP?

There’s a little bit of a learning curve, so we recommend taking a training course. Some even offer training based upon your knowledge of AWS. For example, GCP offers training for AWS professionals. Training can help you acclimate to the nomenclature and technology differences between CSPs.

We typically see customers go deep with one cloud provider, and that tends to be where most workloads reside. This can be for financial reasons or due to skills and experience. You get a greater discount when you push more things into one CSP. However, some solutions fit better in one CSP over the other. To maximize your cloud strategy, you need to break down walls, get out of your comfort zone, and pursue the best avenue for the business.

  5. Talent – Continuously sharpen the knife’s edge

Cloud talent is in high demand, so it can be challenging to attract. One way to overcome this is to develop talent internally. All cloud providers offer certifications, and incentivizing employees to go out and earn those certifications goes a long way. With that, success breeds success. Celebrate and evangelize early wins!

The cloud changes fast, so you need to continuously retrain and relearn. And as a bonus – those individuals who are involved in the CCoE have the unique opportunity to learn and grow outside of their area of expertise, so proactively volunteer to be a part of that group.

If you want more detailed information in any of these five areas, we have a wealth of customer examples we’d love to jump into with you. Contact us to start the conversation.

-Ian Willoughby, Chief Architect and Skip Barry, Executive Cloud Enablement Director


Cloud for New Users – The 4 Most Important Lessons Learned Over a Decade

Over the past ten years we’ve learned quite a bit about cloud migration and achieving success across various platforms. Over that time, a lot has changed, and ongoing innovations continue to provide new opportunities for the enterprise. Here, we’re recapping the four most important lessons we’ve learned for new cloud users.

1. Close the knowledge gap.

With the rate of innovation in the cloud, the knowledge gap is wider than ever, but that innovation has reduced complexity in many ways. To maximize these innovations, businesses must incentivize employees to continue developing new skills.

Certifications and a desire to continue learning and earning credentials are the traits businesses want in their IT employees. Fostering a company culture that encourages experimentation, growth, and embracing new challenges creates an environment that helps employees develop to the next level.

At 2nd Watch, we create a ladder of success that challenges associates to move from intermediate to advanced capabilities. We foster employees’ natural inclinations and curiosities to build on their passions. Exposing people to new opportunities is a great way to invest in their aptitudes and backgrounds to evolve with the company. One way to do this is by setting up a Cloud Center of Excellence (CCOE), a multi-stakeholder group that includes subject matter experts from various areas of the business. This multi-skilled group collectively becomes the subject matter expert in cloud services and solutions. By setting up a CCOE, silos are eliminated and teams work together in an iterative fashion to promote the cloud as a transformative tool.

2. Assemble the right solutions.

Cloud is not always cheaper. If you migrate to the cloud without mapping to the right solutions, you risk increasing cost. For example, if you come from a monolithic architectural environment, it can be tempting to try and recreate that architecture in the cloud.

But, unlike your traditional on-prem environment, many resources in the cloud do not require a persistent state. You have the freedom to allow jobs like big data and ETL (extract, transform and load) to run just once a day, rather than 24 hours a day. If you need it for an hour, spin it up for the hour, access your data in your cloud provider’s storage area, then turn it off to minimize usage and costs (see the sketch below).
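As a sketch of that idea, the snippet below stops any running EC2 instances tagged as needed only during office hours. It assumes Python with boto3; the schedule tag key and value are hypothetical, and in practice you would trigger this from a scheduled Lambda function or cron job each evening.

```python
# Hedged sketch: stop instances that are only needed during business hours.
import boto3

ec2 = boto3.client("ec2")

# Find running instances carrying the (hypothetical) schedule=office-hours tag
response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    instance["InstanceId"]
    for reservation in response["Reservations"]
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopped:", instance_ids)
```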

You can also perform simple tweaks to your architecture to improve performance. We recommend exploring containerization and serverless models to implement automation where possible. New cloud users should adapt to the new environment to allow for future use cases, provision resources for future states, and use assets based on scalability. Cloud allows you to map solutions to scale. Partners like 2nd Watch help create a roadmap based on forecasting from current usage.

3. Combine services based on desired outcomes.

There is a plethora of cloud service options available, and the way you use them should be driven by the outcomes you want. Are you looking to upgrade? Lift and shift? Advance the business forward? Once you have a clear outcome defined, you can begin your cloud journey with that goal in mind and start planning how best to use each cloud service.

4. Take an active role in the shared responsibility model.

In traditional IT environments, security falls solely on the company, but as a cloud user, the model is significantly different. Many cloud service providers utilize a shared security responsibility model where both the cloud provider and the user take ownership over different areas of security.

Oftentimes, cloud providers can offer more security than your traditional datacenter environment ever could. For example, you are not even permitted to see your cloud provider’s datacenters. Their locations are not disclosed to the public, and even datacenter employees don’t know where your customer data resides.

Although your cloud provider handles much of the heavy lifting, it’s your responsibility to architect your applications correctly. You need to ensure your data is being put into the appropriate areas with the proper roles and responsibilities for access.

Are you ready to explore your options in the cloud? Contact 2nd Watch to learn more about migration, cloud enabled automation, and our multi-layered approach to security.

-Ian Willoughby, Chief Architect and Skip Barry, Executive Cloud Enablement Director


Cloud Crunch Podcast: Examining the Cloud Center of Excellence

What is a Cloud Center of Excellence (CCOE), and how can you ensure its success? Joe Kinsella, CTO of CloudHealth, talks with us today about the importance of a CCOE, the steps to cloud maturity, and how to move through the cloud maturity journey. We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.


Cloud Crunch Podcast: Diving into Data Lakes and Data Platforms

Data Engineering and Analytics expert, Rob Whelan, joins us today to dive into all things data lakes and data platforms. Data is the key to unlocking the path to better business decisions. What do you need data for? We look at the top 5 problems customers have with their data, how the cloud has helped solve these challenges, and how you can leverage the cloud for your data use. We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.
