
3 Productivity-Killing Data Problems and How to Solve Them

With the typical enterprise using over 1,000 Software as a Service applications (source: Kleiner Perkins), each with its own private database, it’s no wonder people complain their data is siloed. Picture a thousand little silos, all locked up!

[Chart: Number of cloud applications used per enterprise, by industry vertical]

Then, imagine you start building a dashboard out of all those data silos. You’re squinting at it and wondering: can I trust this dashboard? You placate yourself because at least you have data to look at, but every answer raises more questions for which the data doesn’t yet exist.

If you’re in a competitive industry, and we all are, you need to take your data analysis to the next level. You’re either gaining an advantage over your competition or being left behind.

As a business leader, you need data to support your decisions. These three data complexities are at the core of every leader’s difficulties with gaining business advantages from data:

  1. Siloed data
  2. Untrustworthy data
  3. No data

 

  1. Siloed data

Do you have trouble seeing your data at all? Are you mentally scanning your systems and realizing just how many different databases you have? A recent customer of ours was collecting reams of data from their industrial operations but couldn’t derive the data’s value due to the siloed nature of their datacenter database. The data couldn’t reach any dashboard in any meaningful way. It is a common problem. With enterprise data doubling every few years, it takes modern tools and strategies to keep up with it.

For our customer, we started with defining the business purpose of their industrial data – to predict demand in the coming months so they didn’t have a shortfall. That business purpose, which had team buy-in at multiple corporate levels, drove the entire engagement. It allowed us to keep the technology simple and focused on the outcome.

One month into the engagement, they had clean, trustworthy, valuable data in a dashboard. Their data was unlocked from the database and published.

Siloed data takes some elbow grease to access, but the work becomes a lot easier if you have a goal in mind for the data. Knowing where you are going cuts through the noise and helps you make decisions more easily.

  2. Untrustworthy data

Do you have trouble trusting your data? You have a dashboard, yet you’re pretty sure the data is wrong, or much of it is missing. You hesitate to act on it because you don’t trust it. Trustworthiness is a prerequisite for making your data actionable. But most data has problems – missing values, invalid dates, duplicate values, and meaningless entries. If you don’t trust the numbers, you’re better off without the data.

Data is there for you to take action on, so you should be able to trust it. One key strategy is to not bog down your team with maintaining systems, but rather use simple, maintainable, cloud-based systems that use modern tools to make your dashboard real.
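The defects listed above are easy to check for programmatically. Here is a minimal sketch of such a data-quality audit; the field names (“email”, “signup_date”) and date format are illustrative assumptions, not a prescription:

```python
from datetime import datetime

def audit(records):
    """Count common data-quality defects in a list of customer records.

    Field names ("email", "signup_date") are hypothetical examples.
    """
    issues = {"missing_email": 0, "duplicate_email": 0, "invalid_date": 0}
    seen = set()
    for record in records:
        email = record.get("email")
        if not email:
            issues["missing_email"] += 1
        elif email in seen:
            issues["duplicate_email"] += 1
        else:
            seen.add(email)
        try:
            # Reject missing or malformed dates (e.g. month 13).
            datetime.strptime(record.get("signup_date", ""), "%Y-%m-%d")
        except ValueError:
            issues["invalid_date"] += 1
    return issues
```

Running checks like these on every load, and surfacing the counts alongside your dashboard, turns “I think the data is wrong” into a measurable quality signal.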

  3. No data

Often you don’t even have the data you need to make a decision. “No data” comes in many forms:

  • You don’t track it. For example, you’re an ecommerce company that wants to understand how email campaigns can help your sales, but you don’t have a customer email list.
  • You track it but you can’t access it. For example, you start collecting emails from customers, but your email SaaS system doesn’t let you export your emails. Your data is so “siloed” that it effectively doesn’t exist for analysis.
  • You track it but need to do some calculations before you can use it. For example, you have a full customer email list, a list of product purchases, and you just need to join the two together. This is a great place to be and is where we see the vast majority of customers.
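That last case – two clean lists that just need joining – can be as simple as a dictionary lookup. A minimal sketch, with hypothetical field names:

```python
def join_purchases(customers, purchases):
    """Attribute each purchase to a customer email.

    customers: {customer_id: email}; purchases: list of (customer_id, product).
    Names are illustrative, not a real schema.
    """
    joined = []
    for customer_id, product in purchases:
        email = customers.get(customer_id)
        if email:  # keep only purchases we can attribute to a known email
            joined.append({"email": email, "product": product})
    return joined
```

At data-lake scale the same operation is a SQL join, but the logic is no more complicated than this.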

Getting the most from your data means finding patterns and insights not just within datasets, but across them. This is only possible with a modern, cloud-native data lake.

The solution: define your business need and build a data lake

Step one for any data project – today, tomorrow and forever – is to define your business need.

Do you need to understand your customer better? Whether it is click behavior, email campaign engagement, order history, or customer service, your customer generates more data today than ever before that can give you clues as to what she cares about.

Do you need to understand your costs better? Most enterprises have hundreds of SaaS applications generating data from internal operations. Whether it is manufacturing, purchasing, supply chain, finance, engineering, or customer service, your organization is generating data at a rapid pace.

(Source: AWS, “What is a Data Lake?”)

Don’t be overwhelmed. You can cut through the noise by defining your business case.

The second step in your data project is to take that business case and make it real in a cloud-native data lake. Yes, a data lake. I know the term has been abused over the years, but a data lake is very simple: it’s a way to centrally store all (all!) of your organization’s data, cheaply, in open source formats that make it easy to access from any direction.

Data lakes used to be expensive, difficult to manage, and bulky. Now, all major cloud providers (AWS, Azure, GCP) have established best practices to keep storage dirt-cheap and data accessible and very flexible to work with. But data lakes are still hard to implement and require specialized, focused knowledge of data architecture.

How does a data lake solve these three problems?

  1. Data lakes de-silo your data. Since the data stored in your data lake is all in the same spot, in open-source formats like JSON and CSV, there aren’t any technological walls to overcome. You can query everything in your data lake from a single SQL client. If you can’t, then that data is not in your data lake and you should bring it in.
  2. Data lakes give you visibility into data quality. Modern data lakes and expert consultants build in a variety of checks for data validation, completeness, lineage, and schema drift. These are all important concepts that together tell you if your data is valuable or garbage. These sorts of patterns work together nicely in a modern, cloud-native data lake.
  3. Data lakes welcome data from anywhere and allow for flexible analysis across your entire data catalog. If you can format your data into CSV, JSON, or XML, then you can put it in your data lake. This solves the problem of “no data.” It is very easy to create the relevant data, either by finding it in your organization, or engineering it by analyzing across your data sets. An example would be joining data from Sales (your CRM) and Customer Service (Zendesk) to find out which product category has the best or worst customer satisfaction scores.
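As a concrete illustration of that last point, here is a small sketch of the Sales-plus-Zendesk analysis – average customer satisfaction per product category. All field names are assumptions; in a real data lake this would typically be a single SQL query (for example, in Athena) across both datasets:

```python
from collections import defaultdict

def csat_by_category(orders, tickets):
    """Average CSAT score per product category across two datasets.

    orders: {order_id: category} (from your CRM);
    tickets: list of (order_id, csat_score) (from customer service).
    All names are hypothetical.
    """
    totals = defaultdict(lambda: [0, 0])  # category -> [score_sum, count]
    for order_id, score in tickets:
        category = orders.get(order_id)
        if category:  # skip tickets we cannot join to an order
            totals[category][0] += score
            totals[category][1] += 1
    return {cat: s / n for cat, (s, n) in totals.items()}
```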

The 2nd Watch DataOps Foundation Platform

You should only build a data lake if you have clear business outcomes in mind. Most cloud consulting partners will robotically build a bulky data lake without any thought to the business outcome. What sets 2nd Watch apart is our focus on your business needs. Do you need to make better decisions? Speed up a process? Reduce costs somewhere? We keep your goal front and center throughout the entire engagement. We’ve deployed data lakes dozens of times for enterprises with this unique focus in mind.

Our ready-to-deploy data lake captures years of cloud experience and best practices, with integration from governance to data exploration and storage. We explain the reasons behind the decisions and make changes based on your requirements, while ingesting data from multiple sources and exploring it as soon as possible. In the above image, the core of the data lake is the three zones represented by green S3 bucket squares.

Here is a tour of each zone:

  • Drop Zone: As the “single source of truth,” this is a copy of your data in its most raw format, always available to verify what the actual truth is. Place data here with minimal or no formatting. For example, you can take a daily “dump” of a relational database in CSV format.
  • Analytics Zone: To support general analytics, data in the Analytics Zone is compressed and reformatted for fast analytics. From here, you can use a single SQL Client, like Athena, to run SQL queries over your entire enterprise dataset — all from a single place. This is the core value add of your data lake.
  • Curated Zone: The “golden” or final, polished, most-valued datasets for your company go here. This is where you save and refresh data that will be used for dashboards or turned into visualizations.
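One common way to realize these zones is as prefixes within S3, with a date partition in the key. The naming convention below is purely illustrative, not 2nd Watch’s actual layout:

```python
from datetime import date

# Hypothetical prefix names for the three zones described above.
ZONE_PREFIXES = {
    "drop": "drop-zone",
    "analytics": "analytics-zone",
    "curated": "curated-zone",
}

def zone_key(zone, source, dataset, day, ext):
    """Build an S3 object key placing a dataset in one of the three zones,
    partitioned by date so query engines can prune by day."""
    prefix = ZONE_PREFIXES[zone]
    return f"{prefix}/{source}/{dataset}/dt={day.isoformat()}/{dataset}.{ext}"
```

A daily CSV dump of an ERP’s orders table might land at `drop-zone/erp/orders/dt=2020-05-01/orders.csv`, then be rewritten into the analytics zone in a compressed, columnar format.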

Our classic 3-zone data lake on S3 features immutable data by default. You’ll never lose data, nor do you have to configure many settings to accomplish this. Using AWS Glue, data is automatically compressed and archived to minimize storage costs. Convenient search with an always-up-to-date data catalog lets you easily discover all your enterprise datasets.

In the Curated Zone, only the most important “data marts” – approved datasets – get loaded into more costly Redshift or RDS, minimizing costs and complexity. And with Amazon SageMaker, tapping into your Analytics and Curated Zone, you are prepared for effective machine learning. One of the most overlooked aspects of machine learning and advanced analytics is the great importance of clean, available data. Our data lake solves that issue.

If you’re struggling with one of these three core data issues, the solution is to start with a crisp definition of your business need, and then build a data lake to execute on that need. A data lake is just a central repository for flexible and cheap data storage. If you focus on keeping your data lake simple and geared towards the analysis you need for your business, these three core data problems will be a thing of the past.

If you want more information on creating a data lake for your business, download our DataOps Foundation datasheet to learn about our 4-8 week engagement that helps you build a flexible, scalable data lake for centralizing, exploring and reporting on your data.

-Rob Whelan, Practice Manager, Data Engineering & Analytics

 

 


Cloud for Advanced Users – The 5 Most Important Lessons Learned Over a Decade

Being involved in cloud services and working closely with cloud providers over the past 10 years has given us a great deal of insight into the triumphs and pitfalls of cloud consumers. We’ve distilled that vast experience and come up with our list of the 5 most important lessons we’ve learned over the past decade for users who are experienced in the cloud and have multiple applications and workloads running.

  1. Governance – Tagging, Tools, and Automation

Many of our customers have hundreds, if not thousands, of accounts, and we’ve helped them solve many of their governance challenges. One challenge is preventing practices like shadow IT and teams functioning in silos. In the cloud, you want everyone to have visibility into best practices and an understanding of the critical role cloud plays in creating business value.

There are numerous tools and automation methods you can leverage to ensure your governance is in step with the latest innovation. First and foremost, a strong tagging strategy is critical. As with shadow IT, if you don’t tag things correctly, your teams can spin up resources with limited visibility on who owns them, continuously running and accumulating expenses over time. If you don’t start with a tagging strategy from day one, retroactively correcting is a herculean task. Starting with a strong architectural foundation and making sure that foundation stays in place with the proper tools will ensure governance doesn’t become a burden.

Putting the proper guardrails in place for this, such as AWS Config, can help overcome this challenge and make sure everybody’s following the rules. Sometimes governance and moving fast can seem like adversaries, but automation can help satisfy both.
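The core of such a guardrail is a simple compliance check. Here is an illustrative sketch that flags resources missing required cost-allocation tags; the tag keys are assumptions, and in practice an AWS Config rule would run this kind of check continuously against live accounts:

```python
# Hypothetical required cost-allocation tag keys.
REQUIRED_TAGS = {"Owner", "CostCenter", "Environment"}

def untagged(resources):
    """Return the IDs of resources missing any required tag key.

    resources: list of {"id": ..., "tags": {key: value}} dicts, as you
    might assemble from a cloud provider's inventory API.
    """
    return [
        r["id"]
        for r in resources
        if not REQUIRED_TAGS <= set(r.get("tags", {}))
    ]
```

Reports like this, run on a schedule, are what make retroactive tag cleanup tractable instead of herculean.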

  2. Optimization – It’s not a one-time exercise

Cloud users tend to think of optimization in terms of Reserved Instances (RIs), but it reaches far beyond RIs. Well-defined policies must exist to control spend, along with the discipline to follow them.

There are many ways to leverage cloud native solutions and products to achieve optimization as well as new classes of service. One key point is leveraging the right resources where appropriate. As new services come out and skills increase within organizations, the opportunity to not only optimize spend but optimize the applications themselves by leveraging more cloud native services will continue to drive down operating cost.

Optimization is not a one-time exercise, either. It’s an ongoing practice that needs to be done on a regular basis. Like cleaning out the garage, you need to maintain it. Who’s responsible for this? Often, it’s your company’s Cloud Center of Excellence, or a partner like 2nd Watch.

  3. Cloud Center of Excellence – Be bold and challenge the norm

We encourage all organizations to form a Cloud Center of Excellence (CCoE). Typically led by an executive, your CCoE should be a multi-stakeholder organization that includes representatives from all areas of the business. With this multi-skilled group, you benefit from subject matter experts across a wide variety of areas within your organization who collectively become subject matter experts in cloud services and solutions. When you break down silos, you’re able to move rapidly.

Your CCoE should be formed at the beginning of your migration and continue to revisit new capabilities released in the cloud on an ongoing basis, updating the organization’s standards to ensure enforcement.

One of the CCoE’s biggest roles is evangelizing within the organization to ensure people are embracing the cloud and celebrating successes, whether they come from implementing DevOps with cloud native tools or from optimizing and refactoring for the cloud. The CCoE’s motto should be: ‘Be bold, challenge the norm, look for new ways of doing things, and celebrate BIG.’

  4. Multi-Cloud – Get out of your comfort zone

As an advanced user, you have grown up with AWS and have a solid understanding of its products, services, and acronyms. But now you’re being asked to integrate another cloud service provider (CSP) you might not be as familiar with. How do you take that foundational cloud knowledge and transition to Azure or GCP?

There’s a little bit of a learning curve, so we recommend taking a training course. Some even offer training based upon your knowledge of AWS. For example, GCP offers training for AWS professionals. Training can help you acclimate to the nomenclature and technology differences between CSPs.

We typically see customers go deep with one cloud provider, and that tends to be where most workloads reside. This can be for financial reasons or due to skills and experience. You get a greater discount when you push more things into one CSP. However, some solutions fit better in one CSP over the other. To maximize your cloud strategy, you need to break down walls, get out of your comfort zone, and pursue the best avenue for the business.

  5. Talent – Continuously sharpen the knife’s edge

Top talent is in high demand and can be challenging to attract. One way to overcome this is to develop talent internally. All cloud providers offer certifications, and incentivizing employees to go out and earn those certifications goes a long way. With that, success breeds success. Celebrate and evangelize early wins!

The cloud changes fast, so you need to continuously retrain and relearn. And as a bonus – those individuals that are involved in the CCoE have the unique opportunity to learn and grow outside of their area of expertise, so proactively volunteer to be a part of that group.

If you want more detailed information in any of these five areas, we have a wealth of customer examples we’d love to jump into with you. Contact us to start the conversation.

-Ian Willoughby, Chief Architect and Skip Barry, Executive Cloud Enablement Director


Cloud for New Users – The 4 Most Important Lessons Learned Over a Decade

Over the past ten years we’ve learned quite a bit about cloud migration and achieving success across various platforms. Over that time, a lot has changed, and ongoing innovations continue to provide new opportunities for the enterprise. Here, we’re recapping the four most important lessons we’ve learned for new cloud users.

1. Close the knowledge gap.

With the rate of innovation in the cloud, the knowledge gap is wider than ever, but that innovation has reduced complexity in many ways. To maximize these innovations, businesses must incentivize employees to continue developing new skills.

Certifications and a desire to continue learning and earning credentials are the traits businesses want in their IT employees. Fostering a company culture that encourages experimentation, growth, and embracing new challenges creates an environment that helps employees develop to the next level.

At 2nd Watch, we create a ladder of success that challenges associates to move from intermediate to advanced capabilities. We foster employees’ natural inclinations and curiosities to build on their passions. Exposing people to new opportunities is a great way to invest in their aptitudes and backgrounds to evolve with the company. One way to do this is by setting up a Cloud Center of Excellence (CCoE), a multi-stakeholder group that includes subject matter experts from various areas of the business. Together, this multi-skilled group becomes the collective subject matter expert in cloud services and solutions. By setting up a CCoE, silos are eliminated and teams work together in an iterative fashion to promote the cloud as a transformative tool.

2. Assemble the right solutions.

Cloud is not always cheaper. If you migrate to the cloud without mapping to the right solutions, you risk increasing cost. For example, if you come from a monolithic architectural environment, it can be tempting to try and recreate that architecture in the cloud.

But, unlike your traditional on-prem environment, many resources in the cloud do not require a persistent state. You have the freedom to let jobs like big data and ETL (extract, transform, and load) run just once a day, rather than 24 hours a day. If you need it for an hour, spin it up for the hour, access your data in your cloud provider’s storage area, then turn it off to minimize usage and costs.
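The savings from that on-demand pattern are easy to estimate. A back-of-envelope sketch, assuming a made-up $0.40/hour instance rate and a one-hour daily ETL job:

```python
# Illustrative cost comparison: always-on vs. one hour per day.
# The $0.40/hour rate is a hypothetical example, not a real price.
hourly_rate = 0.40

always_on = hourly_rate * 24 * 30  # instance running all month: $288
on_demand = hourly_rate * 1 * 30   # one hour per day for a month: $12

savings = 1 - on_demand / always_on  # fraction of spend avoided (~96%)
```

Even with rough numbers, running compute only when the job needs it cuts the bill by an order of magnitude.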

You can also perform simple tweaks to your architecture to improve performance. We recommend exploring containerization and serverless models to implement automation where possible. New cloud users should adapt to the new environment to allow for future use cases, provision resources for future states, and use assets based on scalability. Cloud allows you to map solutions to scale. Partners like 2nd Watch help create a roadmap based on forecasting from current usage.

3. Combine services based on desired outcomes.

There is a plethora of cloud service options available, and the way you use them should be driven by the outcomes you want. Are you looking to upgrade? Lift and shift? Advance the business forward? Once you have a clear outcome defined, you can begin your cloud journey with that goal in mind and start planning how best to use each cloud service.

4. Take an active role in the shared responsibility model.

In traditional IT environments, security falls solely on the company, but as a cloud user, the model is significantly different. Many cloud service providers utilize a shared security responsibility model where both the cloud provider and the user take ownership over different areas of security.

Oftentimes, cloud providers can offer more security than your traditional datacenter environment ever could. For example, you are not even permitted to see your cloud provider’s data centers. Their locations are not disclosed to the public, and even datacenter employees don’t know where specific customer data resides.

Although your cloud provider handles much of the heavy lifting, it’s your responsibility to architect your applications correctly. You need to ensure your data is being put into the appropriate areas with the proper roles and responsibilities for access.

Are you ready to explore your options in the cloud? Contact 2nd Watch to learn more about migration, cloud enabled automation, and our multi-layered approach to security.

-Ian Willoughby, Chief Architect and Skip Barry, Executive Cloud Enablement Director


Fully-Managed DevOps – Is It Possible?

If you’re in a development or operations role, you probably balked at this title. The truth is, having some other company manage your “DevOps” is an insult to the term. However, bear with me while I lay out this scenario:

  • What if you don’t have a team that can manage all your tools that enable you to adopt DevOps methods?
  • Why should you have to spend time managing the tools you use, instead of developing and operating your application?
  • What if your team isn’t ready for this big cultural, process, and tooling change or disagrees on where to begin?

These are key reasons to consider adopting a DevOps platform managed by experts.

Just a Quick Definition:

To bring you along my thought process, let’s first agree on what DevOps IS. DevOps, a term built by combining the words Development and Operations, is a set of cultural values and organizational practices implemented with the intent to improve business outcomes. DevOps methods were initially formed to bridge the gap between Development and Operations so that teams could increase speed to delivery as well as quality of product at the same time. The focus of DevOps is to increase collaboration and feedback between Business Stakeholders, Development, QA, IT or Cloud Operations, and Security to build better products or services.

When companies attempt to adopt DevOps practices, they often think of tooling first. However, a true DevOps transformation includes an evolution of your company culture, processes, collaboration, measurement systems, organizational structure, and automation and tooling — in short, things that cannot be accomplished through automation alone.

Why DevOps?
Adopting DevOps practices can be a gamechanger in your business if implemented correctly. Some of the benefits include:

  • Increase Operational Efficiencies – Simplify the software development toolchain and minimize re-work to reduce total cost of ownership.
  • Deliver Better Products Faster – Accelerate the software delivery process to quickly deliver value to your customers.
  • Reduce Security and Compliance Risk – Simplify processes to comply with internal controls and industry regulations without compromising speed.
  • Improve Product Quality, Reliability, and Performance – Limit context switching, reduce failures, and decrease MTTR while improving customer experience.

The basic goal here is to create and enable a culture of continuous improvement.

DevOps Is Not All Sunshine and Roses:

Despite the promise of DevOps, teams still struggle due to conflicting priorities and opposing goals, lackluster measurement systems, lack of communication or collaborative culture, technology sprawl creating unreliable systems, skill shortage, security bottlenecks, rework slowing progress…you get the picture. Even after attempting to solve these problems, many large enterprises face setbacks including:

  • Reliability: Their existing DevOps toolchain is brittle, complex, and expensive to maintain.
  • Speed: Developers are slowed down by bottlenecks, hand-offs, and re-work.
  • Security: Security is slowing down their release cycle, but they still need to scan code for licensing and vulnerability issues before it goes out.
  • Complexity: DevOps is complex and an ongoing process. They don’t currently have the internal skillset to start or continue their progress.
  • Enterprise Ready: SaaS DevOps offerings do not provide the privacy or features they require for enterprise security and management.

Enter Managed DevOps:

Managed DevOps removes much of this complexity by providing you with a proven framework for success beginning with an assessment that sets the go-forward strategy, working on team upskilling, implementing end-to-end tooling, and then finally providing ongoing management and coaching.

If you have these symptoms, Managed DevOps is the cure:

  • Non-Existent or Brittle Pipeline
  • Tools are a Time Suck; No time to focus on application features
  • You know change is necessary, but your team disagrees on where to begin

Because Managed DevOps helps bring your teams along the change curve by providing key upskilling and support, plus a proven toolchain, you can kick off immediately without spending months debating tooling or process.

If you’re ready to remove the painful complexity and start to build, test, and deploy applications in the cloud in a continuous and automated way, talk with our DevOps experts about implementing a Managed DevOps solution.

-Stefana Muller, Sr Product Manager


DataOps: Get your data out of silos and into the middle of the action.

Are you facing pressure to make better decisions, faster? Are you uneasy about making too many gut-level business decisions? Are you being asked to have a data strategy from above and wondering how to compete in a data-driven world?

You are not alone. These are common themes emerging in today’s digital economy. Customers of all kinds – from consumers to enterprise businesses – have more choices than ever before. That means your customers are demanding more service, faster, and at a higher quality. How you decide to meet these needs is becoming very complex. You need to choose among many competing options. Increasingly, making these decisions by trusting your gut is a recipe for disaster!

These difficult decisions are not made any easier with the rise of Software as a Service (SaaS). While it’s easy to get up and going with SaaS offerings to handle business productivity needs, with every new SaaS offering you use, you end up siloing your data even more. Every department, every business function, has multiple data silos that make holistic business analysis an uphill climb. How can you tie together customer satisfaction and operations data if the data is in two different systems?

Can you find the data you need? Once you find it, do you trust it? It just shouldn’t be this hard to make business decisions!

We know this is a common problem, because we hear it over and over again from our customers. We continue to hear about this problem, despite the relative maturity of “big data” systems. If big data has been a thing for at least two decades, why are we still struggling to make sense of it all? Our diagnosis is pretty simple:

  1. Data projects that lack a business goal will fail, and most data projects lack a clear business goal, such as “increasing customer satisfaction.”
  2. It’s hard to find people to do the hard work of connecting systems and pulling data out.

So, despite fantastic big data ecosystems being widely available, if you lack a clear business objective and you can’t assign people to roll up their sleeves and move data to where it needs to be, then unfortunately your data initiative will die on the vine.

Our solution to this is very straightforward:

  1. We start with the business goal and never put it on the back burner. Our consultants are trained to listen for and capture business objectives from your team (and people around your team) and hang onto them tightly, while allowing flexibility when it comes to the implementation details. This is very rare in cloud consulting. Most cloud consultancies miss the business goals and skip straight to engineering. We think this is unacceptable and have seen it lead to purposeless, cash-hemorrhaging projects.
  2. We then rapidly get to work and implement our best-practices DataOps solution. It’s pre-built, uses 100% serverless AWS offerings, and is battle-tested over dozens of successful deployments and years of incorporating AWS best practices. Since it is serverless, scaling your DataOps foundation to dozens or hundreds of data sources is painless.
  3. Then, we connect your first several data sources, such as Salesforce, or logs, or customer data, or whatever we together have identified will support your business use case. This is the hard work of rolling up your sleeves, and we have the people to do it.
  4. Within the first two weeks, most customers are analyzing data from multiple sources in a single pane of glass.
  5. Finally, we make your analytics production-ready and help you share the good news around your organization.

Here are the benefits our customers tell us they’ve received:

  1. You can make better, data-driven decisions. Since we start and end the engagement with your business focus in mind, you are able to make better, data-informed decisions. Where before you were trusting your gut, now you have real, relevant, current data to support your decision making. You’re not driving blind.
  2. You can trust your dashboards and reports. Since we have implemented a best-practices Data Catalog, you have a crystal-clear picture of how your data got to its end state. You are not questioning “is this data real?” because you have clear traceability of data from source to metrics. If you can’t trust your data when you try to act on it, what’s the point?
  3. Your analysis gets even better with yet more data sources. Now that you have a central data lake with easy-to-replicate patterns for bringing in new data, you can make your analyses even richer by adding yet more sources. Many of our customers enrich their data with a wide variety of internal sources, and even external sources like weather and macroeconomic data, to find new correlations and trends that were not possible before.
  4. You feed a culture of DataOps. Word will get around that your team has the ability to drastically simplify data access and analysis because our DataOps Foundation comes with commonsense access rules right out of the box. It is not a threat to give access to the right people – it will help your business operate. This tends to have a flywheel effect. Other departments get excited and want to add their data; analyses get better and richer; then even more people want to bring in their data.
  5. You are now AI-ready. If all the analytical benefits were not enough, you are now also ready for AI and machine learning (ML). It’s just not possible to perform any kind of AI with messy data. With our DataOps Solution, you have solved two problems at once – you have action-ready business data, and you have cleared the path for repeatable AI projects.

You are not alone if you still can’t get the data you need. If your data still feels invisible to you, and you don’t think it should be so hard to crunch data for business outcomes, then you should know that there is a better way. Our DataOps Solution puts your business goals front and center. Our straightforward engagement has you centralizing and analyzing data, in the cloud, securely, within a week or two. Then, you can add more sources to your heart’s content and enjoy the benefits of being data-driven and AI-ready in today’s demanding economy.

To get started, contact us to book a discussion and a demo.

-Rob Whelan, Practice Manager, Data Engineering & Analytics


How Cherwell Software Improved Customer Experience with AMS

2nd Watch helped Cherwell Software onboard to AWS Managed Services (AMS) to provide a holistic approach to SaaS architecture and improve their customer experience. When managing infrastructure was taking time away from Cherwell’s product development, 2nd Watch served as a consulting partner, developing the strategy and engagement to onboard to AMS quickly and enabling Cherwell to provide a great service management experience to its customers.
