
Cloud Crunch Podcast: Azure Cloud Adoption Framework (CAF)

On today’s episode of Cloud Crunch, Farida Bharmal of Microsoft One Commercial Partner joins us to talk about Microsoft’s Cloud Adoption Framework (CAF). We discuss the components of CAF – strategy, plan, ready, migrate, innovate, govern, and manage – where customers are struggling, and how they’re overcoming those challenges. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.


Cloud Crunch Podcast: Hybrid Cloud Computing

This week on Cloud Crunch, we welcome our first guest, Dusty Simoni, Sr Product Manager at 2nd Watch, to discuss hybrid cloud computing. We dive into what hybrid cloud is, examples of hybrid, benefits, complexities, and how to get started. For this conversation, we look at hybrid cloud as on-premises infrastructure and public cloud – specifically around AWS, Azure and VMware – and exclude private cloud services. Listen now on Spotify, iHeart Radio, iTunes, or wherever you get your podcasts.


Introducing the New Cloud Crunch Podcast

We’re excited to announce we’ve launched a new podcast focused on all things cloud! The Cloud Crunch podcast is intended to add value to any large enterprise that is planning a move to the cloud or is already in the midst of one. We share our decade-long experience with listeners, along with current market trends, customer perspectives, thought leadership, the partner landscape, and third-party tools. We dive deep into best practices and into how customers like you are overcoming challenges such as organizational transformation, job skills, and the changing role of IT professionals, and we cover future technologies, events, news, and trends. Join our 2nd Watch hosts Jeff Aden, Co-Founder and EVP; Ian Willoughby, Chief Architect; and Skip Barry, Executive Director of Cloud Enablement, as we delve into a multitude of topics within the cloud landscape. Listen now on Spotify, iHeart Radio, iTunes, or wherever you get your podcasts.

 


Gartner Report: How to Cultivate Effective ‘Remote Work’ Programs

Building and accelerating the productivity of a distributed workforce is a competitive advantage in today’s business environment, but if the emphasis is on the wrong end goal, efforts will fail.

Gartner describes the essential first steps of creating remote work programs, including empowering employees and managers to be effective, preparing employees for the demands of remote work, setting expectations, and stress-testing your technology infrastructure to ensure it can support remote work.

Access the report to learn more and ensure you’re set up for remote work success.


What is DevOps to Us?

What is DevOps to us? Our experts talk automating deployment of applications and infrastructure, including CI/CD pipelines, and the most popular DevOps tools.


CCPA and the cloud

Since the EU introduced the General Data Protection Regulation (GDPR) in 2018, all eyes have been on the U.S. to see if it will follow suit. While a number of states have enacted data privacy statutes, the California Consumer Privacy Act (CCPA) is the most comprehensive U.S. state law to date. Entities were expected to be in compliance with CCPA as of January 1, 2020.

CCPA compliance requires entities to think about how the regulation will impact their cloud infrastructures and development of cloud-native applications. Specifically, companies must understand where personally identifiable information (PII) and other private data lives, and how to process, validate, complete, and communicate consumer information and consent requests.

What is CCPA and how to ensure compliance

CCPA gives California residents greater privacy rights over the data that companies collect about them. It applies to any business with customers in California that either has gross revenues over $25 million or acquires personal information from more than 50,000 consumers per year. It also applies to companies that earn more than half their annual revenue selling consumers’ personal information.

To ensure compliance, the first thing firms should look at is whether they are collecting PII and, if they are, whether they know exactly where it is going. CCPA not only mandates that California consumers have the right to know what PII is being collected, it also states that customers can dictate whether it’s sold or deleted. Further, if a company suffers a security breach, California consumers have the right to sue that company under the state’s data notification law. This increases the potential liability for companies whose security is breached, especially if their security practices do not conform to industry standards.

Regulations regarding data privacy are proliferating, and it is imperative that companies set up an infrastructure foundation that helps them evolve fluidly with these changes to the legal landscape, as opposed to “frankensteining” their environments to play catch-up. The first step is data mapping, in order to know where all consumer PII lives and, importantly, where California consumer PII lives. This requires geographic segmentation of the data. There are multiple tools, including cloud-native ones, that empower companies with PII discovery and mapping. Secondly, organizations will need to have a data deletion mechanism in place and an audit trail for data requests, so that they can prove they have investigated, validated, and adequately responded to requests made under CCPA. The validation piece is also crucial – companies must make sure the individual requesting the data is who they say they are.

And thirdly, having an opt-in or opt-out system in place that allows consumers to consent to their data being collected in the first place is essential for any company doing business in California. If the website is targeted at children, there must be a specific opt-in request for any collection of California consumer data. These three steps must be backed by an audit trail that can validate each of them.

The cloud

It’s here that we start to consider the impact on cloud journeys and cloud-native apps, as this is where firms can start to leverage tools that Amazon or Azure, for example, already offer, but that haven’t been integral for most businesses in a day-to-day context until now. This includes machine learning tools for data discovery, which will help companies know exactly where PII lives, so that they may efficiently comply with data subject requests.

Likewise, cloud infrastructures should be set up so that firms aren’t playing catch-up later on when data privacy and security legislation is enacted elsewhere. For example, encrypt everything and make sure access control permissions are up to date. Organizations must also prevent configuration drift with tools that automatically close a security gap or port if one is opened during development.
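To make that concrete, here is a minimal sketch of that kind of automated guardrail. It assumes an AWS environment, uses boto3, and enforces one hypothetical rule – no security group should allow SSH from the open internet – but the same idea extends to any port or drift condition you care about:

```python
import boto3

ec2 = boto3.client("ec2")

def revoke_open_ssh():
    """Find security groups that allow SSH (port 22) from 0.0.0.0/0 and revoke that rule."""
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                open_to_world = perm.get("FromPort") == 22 and any(
                    ip_range.get("CidrIp") == "0.0.0.0/0"
                    for ip_range in perm.get("IpRanges", [])
                )
                if open_to_world:
                    # Revoke only the offending CIDR so legitimate rules stay intact
                    ec2.revoke_security_group_ingress(
                        GroupId=sg["GroupId"],
                        IpPermissions=[{
                            "IpProtocol": perm["IpProtocol"],
                            "FromPort": perm["FromPort"],
                            "ToPort": perm["ToPort"],
                            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
                        }],
                    )
                    print(f"Revoked open SSH on {sg['GroupId']}")

if __name__ == "__main__":
    revoke_open_ssh()
```

Run on a schedule or triggered by change events, a script like this keeps short-lived development changes from becoming permanent gaps; managed options such as AWS Config rules or Azure Policy provide the same kind of drift detection without custom code.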

For application development teams, it’s vital to follow security best practices, such as CIS benchmarks, NIST standards and the OWASP Top Ten. These teams will be getting the brunt of the workload in terms of developing website opt-out mechanisms, for example, so they must follow best practices and be organized, prepared, and efficient.

The channel and the cloud

For channel partners, there are a number of considerations when it comes to CCPA and the cloud. For one, partners who are in the business of infrastructure consulting should know how the legislation affects their infrastructure and what tools are available to set up a client with an infrastructure that can handle the requests CCPA mandates.

This means having data discovery tools in place, which can be accomplished with both cloud-native versions and third-party software. It also means making sure notification mechanisms are in place, such as email or, if you’re on AWS, SNS (Simple Notification Service). Notification mechanisms help automate responses to data subject requests. Additionally, logging must be enabled to establish an audit trail. Consistent resource tagging and global tagging policies are integral to data mapping and quickly finding data. There’s a lot that can be done from an infrastructure perspective, so firms should familiarize themselves with tools that can facilitate CCPA compliance – tools they may never have used in this fashion, or indeed at all.
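As a small sketch of the notification piece – assuming an existing SNS topic that your privacy team subscribes to (the topic ARN, request fields, and values below are hypothetical) – a request-intake workflow could publish each data subject request so it gets routed and logged for the audit trail:

```python
import json
import boto3

sns = boto3.client("sns")

# Hypothetical topic ARN; replace with the topic your privacy/compliance team subscribes to
TOPIC_ARN = "arn:aws:sns:us-west-2:123456789012:ccpa-data-subject-requests"

def notify_request(request_id: str, request_type: str, consumer_email: str) -> None:
    """Publish a CCPA data subject request so it is routed for action and captured in the audit trail."""
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f"CCPA {request_type} request {request_id}",
        Message=json.dumps({
            "request_id": request_id,
            "type": request_type,          # e.g. "access", "deletion", "opt-out"
            "consumer_email": consumer_email,
        }),
    )

notify_request("req-0001", "deletion", "consumer@example.com")
```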

Ultimately, when it comes to CCPA, don’t sleep on it. GDPR went into effect less than two years ago, and already we have seen huge fines doled out to the likes of British Airways and Google for compliance failures. The EU has been aggressive about ensuring compliance, and California is likely to follow the same playbook. The state knows that in order to give CCPA any teeth, it has to enforce it.

If you’re interested in learning more about how privacy laws might affect cloud development, watch our “CCPA: State Privacy Law Effects on Cloud Development” webinar on-demand, at your convenience.

– Victoria Geronimo, Product Manager – Security & Compliance


Amazon Forecast: Best Practices

In part one of this article, we offered an overview of Amazon Forecast and how to use it. In part two, we get into Amazon Forecast best practices:

Know your business goal

In our data and analytics practice, business value comes first. We want to know and clarify use cases before we talk about technology. Using Amazon Forecast is no different. When creating a forecast, do you want to make sure you always have enough inventory on hand? Or do you want to make sure that all your inventory gets used all the time? This will drive which quantile you look at.

Each quantile – the defaults are 10%, 50%, and 90% – is important for its own reasons and should be looked at to give a range. What is the 50% quantile? A forecast at this quantile has a 50-50 chance of being right: the actual value has a 50% chance of coming in higher than the forecast and a 50% chance of coming in lower. The forecast at the 90% quantile has a 90% chance of being higher than the actual value, while the forecast at the 10% quantile has only a 10% chance of being higher. So, if you want to make sure you sell all your inventory, use the 10% quantile forecast.
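To make this concrete, here is a minimal boto3 sketch showing where those quantiles are requested. It assumes a predictor has already been trained; the ARN and names below are hypothetical:

```python
import boto3

forecast = boto3.client("forecast")

# Hypothetical predictor ARN from an earlier create_predictor call
PREDICTOR_ARN = "arn:aws:forecast:us-east-1:123456789012:predictor/demo_predictor"

# Ask for the quantiles that match your business question:
# 0.10 if you need confidence you will sell through inventory,
# 0.90 if you need confidence you will not run out.
response = forecast.create_forecast(
    ForecastName="demo_forecast",
    PredictorArn=PREDICTOR_ARN,
    ForecastTypes=["0.10", "0.50", "0.90"],
)
print(response["ForecastArn"])
```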

Use related time series

Amazon has made it so easy to use related time series in Forecast that you have nothing to lose by making your forecast more robust. All you have to do is make the related time series’ time units the same as those of your target time series.

One way to create a related dataset is to use categorical or binary data whose future values are already known – for example, whether the future time is on a weekend or a holiday or there is a concert playing – anything that is on a schedule that you can rely on.

Even if you don’t know if something will happen, you can create multiple forecasts where you vary the future values. For example, if you want to forecast attendance at a baseball game this Sunday, and you want to model the impact of weather, you could create a feature is_raining and try one forecast with “yes, it’s raining” and another with “no, it’s not raining.”
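A minimal sketch of what those two scenarios look like as related time series files – the item ID, dates, and file names below are hypothetical – each is just a CSV with the same timestamps as the target series and the assumed future values filled in:

```python
import pandas as pd

# Daily timestamps covering both the historical window and the forecast horizon
dates = pd.date_range("2020-01-01", "2020-03-31", freq="D")

related = pd.DataFrame({
    "timestamp": dates.strftime("%Y-%m-%d %H:%M:%S"),
    "item_id": "ballpark_attendance",  # hypothetical item id; must match the target series
    "is_raining": 0,                   # scenario A: assume it does not rain
})

# Scenario B: flip the flag for the game day you want to model
related_rain = related.copy()
related_rain.loc[related_rain["timestamp"].str.startswith("2020-03-29"), "is_raining"] = 1

related.to_csv("related_no_rain.csv", index=False)
related_rain.to_csv("related_rain.csv", index=False)
```

Training one forecast against each file lets you compare how much the rainy-day assumption moves the predicted attendance.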

Look at a range of forecasted values, not a singular forecasted value

Don’t expect the numbers to be precise. One of the biggest values from a forecast is knowing what the likely range of actual values will be. Then, take some time to analyze what drives that range. Can it be made smaller (more accurate) with more related data? If so, can you control any of that related data?

Visualize the results

Show historical and forecast values on one chart. This will give you a sense of how the forecast is trending. You can backfill the chart with actuals as they come in, so you can learn more about your forecast’s accuracy.
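A rough sketch of such a chart, assuming you have exported your historical actuals and the forecast (with p10/p50/p90 columns) to CSV files – the file and column names below are hypothetical:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical exports: historical actuals plus the forecast at three quantiles
history = pd.read_csv("history.csv", parse_dates=["timestamp"])
fcst = pd.read_csv("forecast_export.csv", parse_dates=["date"])

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(history["timestamp"], history["demand"], label="actuals")
ax.plot(fcst["date"], fcst["p50"], label="forecast (p50)")
# Shade the p10-p90 band to emphasize the likely range rather than a single number
ax.fill_between(fcst["date"], fcst["p10"], fcst["p90"], alpha=0.2, label="p10-p90 range")
ax.set_title("History vs. forecast")
ax.legend()
plt.show()
```

As actuals arrive, append them to history.csv and re-plot to see how often reality lands inside the shaded band.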

Choose a “medium term” time horizon

Your time horizon – how far into the future your forecast looks – can be at most 500 timesteps or ⅓ the length of your time series data, whichever is smaller. We recommend starting with a horizon of about 10% of your series length. For example, with two years of daily data (730 points), the cap is 243 days and a 10% horizon is about 73 days. This will give you enough forward-looking forecasts to evaluate the usefulness of your results without taking too long.

Save your data prep code

Save the code you use to stage your data for the forecast. You will be doing this again, and you don’t want to repeat yourself. An efficient way to do this is to use PySpark code inside a SageMaker notebook. If you end up using your forecast in production, you will eventually move that code into a Glue ETL pipeline (which also uses PySpark), so it is best to use PySpark from the start.

Another advantage of using PySpark is that the utilities for reading and writing CSV-formatted data to and from S3 are dead simple, and you will be working with CSV throughout a Forecast project.
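Here is a minimal sketch of that staging step. It assumes a SageMaker or Glue environment where Spark already has access to S3; the bucket, paths, and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("forecast-data-prep").getOrCreate()

# Hypothetical raw export; all we need is a timestamp, an item id, and a demand value
raw = spark.read.csv("s3://my-bucket/raw/sales_export.csv", header=True, inferSchema=True)

target = raw.select(
    F.date_format(F.col("order_date"), "yyyy-MM-dd HH:mm:ss").alias("timestamp"),
    F.col("sku").alias("item_id"),
    F.col("units_sold").cast("float").alias("demand"),  # Forecast expects a float target
)

# Write a single CSV; column order must match the schema you define for the Forecast dataset
target.coalesce(1).write.mode("overwrite").csv("s3://my-bucket/forecast/target/", header=False)
```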

Interpret the results!

The guide to interpreting results is here, but admittedly it is a little dense if you are not a statistician. One easy metric to look at, especially if you use multiple algorithms, is Root Mean Squared Error (RMSE). You want this as low as possible, and, in fact, Amazon will choose its winning algorithm mostly on this value.
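For intuition, RMSE is simply the square root of the average squared difference between the forecast and the actuals; a quick sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical values: actuals from a held-back validation window and the p50 forecast for the same dates
actuals = np.array([120.0, 135.0, 98.0, 160.0, 142.0])
p50_forecast = np.array([110.0, 140.0, 105.0, 150.0, 138.0])

rmse = np.sqrt(np.mean((actuals - p50_forecast) ** 2))
print(f"RMSE: {rmse:.2f}")  # lower is better; compare algorithms on the same data
```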

It will take some time

How long will it take? If you select AutoML, expect model training to take a while – at least 20 minutes for even the smallest datasets. If your dataset is large, it can take an hour or several hours. The same is true when you generate the actual forecast. So, start it at the beginning of the day so you can work with it before lunch, or near the end of your day so you can look at it in the morning.

Data prep details (for your data engineer)

  • Match the ‘forecast frequency’ to the frequency of your observation timestamps.
  • Set the demand datatype to a float prior to import (it might be an integer).
  • Get comfortable with `strptime` and `strftime` – you have only two options for timestamp format.
  • Assume all data are from the same time zone. If they are not, make them that way. Use Python datetime methods.
  • Split out a validation set like this: https://github.com/aws-samples/amazon-forecast-samples/blob/master/notebooks/1.Getting_Data_Ready.ipynb
  • If using pandas dataframes, do not use the index when writing to CSV (see the sketch after this list).
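A minimal pandas sketch of these prep steps, with hypothetical column names, file paths, and cutoff date:

```python
import pandas as pd

df = pd.read_csv("raw_sales.csv")              # hypothetical raw export
df = df.rename(columns={"sku": "item_id"})

# Parse whatever format the source uses, then normalize everything to one time zone (UTC here)
df["timestamp"] = (
    pd.to_datetime(df["order_date"], format="%m/%d/%Y %H:%M")
    .dt.tz_localize("America/Los_Angeles")
    .dt.tz_convert("UTC")
    .dt.tz_localize(None)
)

# Forecast accepts only two timestamp formats; strftime to the one matching your forecast frequency
df["timestamp"] = df["timestamp"].dt.strftime("%Y-%m-%d %H:%M:%S")

# The target value should be a float, not an integer
df["demand"] = df["units_sold"].astype("float")

# Hold back the final month as a validation set to compare against the forecast later
cutoff = "2020-03-01 00:00:00"
train = df[df["timestamp"] < cutoff]
validation = df[df["timestamp"] >= cutoff]

# Never write the pandas index into the CSV
cols = ["timestamp", "item_id", "demand"]
train[cols].to_csv("target_train.csv", index=False, header=False)
validation[cols].to_csv("target_validation.csv", index=False, header=False)
```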

Conclusion

If you’re ever asked to produce a forecast or predict some number in the future, you now have a robust method at your fingertips to get there. With Amazon Forecast, you have access to Amazon.com’s optimized algorithms for time series forecasting. If you can get your target data into CSV format, then you can create a forecast. Before you start, have a business goal in mind – it is essential to think about ranges of possibilities rather than a discrete number. And be sure to keep in mind our best practices for creating a forecast, such as using a “medium term” time horizon, visualizing the results, and saving your data preparation code.

If you’re ready to make better, data-driven decisions, trust your dashboards and reports, confidently bring in new sources for enhanced analysis, create a culture of DataOps, and become AI-ready, contact us to schedule a demo of our DataOps Foundation.

-Rob Whelan, Practice Director, Data & Analytics


Fully-Managed DevOps – Is It Possible?

If you’re in a development or operations role, you probably balked at this title. The truth is, having some other company manage your “DevOps” is an insult to the term. However, bear with me while I lay out this scenario:

  • What if you don’t have a team that can manage all your tools that enable you to adopt DevOps methods?
  • Why should you have to spend time managing the tools you use, instead of developing and operating your application?
  • What if your team isn’t ready for this big cultural, process, and tooling change or disagrees on where to begin?

These are key reasons to consider adopting a DevOps platform managed by experts.

Just a Quick Definition:

To bring you along my thought process, let’s first agree on what DevOps IS. DevOps, a term built by combining the words Development and Operations, is a set of cultural values and organizational practices implemented with the intent to improve business outcomes. DevOps methods were initially formed to bridge the gap between Development and Operations so that teams could increase speed to delivery as well as quality of product at the same time. The focus of DevOps is to increase collaboration and feedback between Business Stakeholders, Development, QA, IT or Cloud Operations, and Security to build better products or services.

When companies attempt to adopt DevOps practices, they often think of tooling first. However, a true DevOps transformation includes an evolution of your company culture, processes, collaboration, measurement systems, organizational structure, and automation and tooling — in short, things that cannot be accomplished through automation alone.

Why DevOps?
Adopting DevOps practices can be a gamechanger in your business if implemented correctly. Some of the benefits include:

  • Increase Operational Efficiencies – Simplify the software development toolchain and minimize re-work to reduce total cost of ownership.
  • Deliver Better Products Faster – Accelerate the software delivery process to quickly deliver value to your customers.
  • Reduce Security and Compliance Risk – Simplify processes to comply with internal controls and industry regulations without compromising speed.
  • Improve Product Quality, Reliability, and Performance – Limit context switching, reduce failures, and decrease MTTR while improving customer experience.

The basic goal here is to create and enable a culture of continuous improvement.

DevOps Is Not All Sunshine and Roses:

Despite the promise of DevOps, teams still struggle due to conflicting priorities and opposing goals, lackluster measurement systems, lack of communication or collaborative culture, technology sprawl creating unreliable systems, skill shortage, security bottlenecks, rework slowing progress…you get the picture. Even after attempting to solve these problems, many large enterprises face setbacks including:

  • Reliability: Their existing DevOps toolchain is brittle, complex, and expensive to maintain.
  • Speed: Developers are slowed down by bottlenecks, hand-offs, and re-work.
  • Security: Security is slowing down their release cycle, but they still need to make sure they scan the code for licensing and vulnerability issues before it goes out.
  • Complexity: DevOps is complex and an ongoing process. They don’t currently have the internal skillset to start or continue their progress.
  • Enterprise Ready: SaaS DevOps offerings do not provide the privacy or features they require for enterprise security and management.

Enter Managed DevOps:

Managed DevOps removes much of this complexity by providing you with a proven framework for success beginning with an assessment that sets the go-forward strategy, working on team upskilling, implementing end-to-end tooling, and then finally providing ongoing management and coaching.

If you have these symptoms, Managed DevOps is the cure:

  • Non-Existent or Brittle Pipeline
  • Tools are a Time Suck; No time to focus on application features
  • You know change is necessary, but your team disagrees on where to begin

Because Managed DevOps helps bring your teams along the change curve by providing the key upskilling and support, plus a proven tool-chain, you can kick off immediately without spending months debating tooling or process.

If you’re ready to remove the painful complexity and start to build, test, and deploy applications in the cloud in a continuous and automated way, talk with our DevOps experts about implementing a Managed DevOps solution.

-Stefana Muller, Sr Product Manager


How to Use Amazon Forecast for your Business

How to use Amazon Forecast: What Is it Good For?

How many times have you been asked to predict revenue for next month or next quarter? Do you mostly rely on your gut? Have you ever been asked to support your numbers? Cue sweaty palms frantically churning out spreadsheets.

Maybe you’ve suffered from the supply chain “bullwhip” effect: you order too much inventory, which makes your suppliers hustle, only to deliver a glut of product that you won’t need to replace for a long time, which makes your suppliers sit idle.

Wouldn’t it be nice to plan for your supply chain as tightly as Amazon.com does? With Amazon Forecast, you can do exactly that. In part one of this two-part article, I’ll provide an overview of the Amazon Forecast service and how to get started. Part two of the article will focus on best practices for using Amazon Forecast.

Amazon Forecast: The backstory

Amazon knows a thing or two about inventory planning, given its intense focus on operations. Over the years, it has used multiple algorithms for accurate forecasting. It even fine-tuned them to run in an optimized way on its cloud compute instances. Forecasting demand is important, if nothing else, to get a “confidence interval” – a range where it’s fairly certain reality will fall, say, 80% of the time.

In true Amazon Web Services fashion, Amazon decided to offer that forecasting capability for sale as Amazon Forecast, a managed service that takes your time series data in CSV format and spits out a forecast into the future. Amazon Forecast gives you a customizable confidence interval that you can set to 95%, 90%, 80%, or whatever percentage you need. And you can re-use and re-train the model with actuals as they come in.

When you use Amazon Forecast, you can tell it to run up to five different state-of-the-art algorithms and pick a winner. This saves you the time of deliberating over which algorithm to use.

The best part about Amazon Forecast is that you can make the forecast more robust by adding in “related” time series – any data that you think is correlated to your forecast. For example, you might be predicting electricity demand based on macro scales such as season, but also on a micro level such as whether or not it rained that day.

Amazon Forecast: How to use

Amazon Forecast is considered a serverless service: you don’t have to manage any compute instances to use it. Because it is serverless, you can create multiple scenarios simultaneously – up to three at once – rather than running them in series. Additionally, Amazon Forecast is low-cost, so it is worth trying and experimenting with often. As is generally the case with AWS, you end up paying mostly for the underlying compute and storage, rather than any major premium for using the service. Like any other machine learning task, you have a huge advantage if you have invested in keeping your data orderly and accessible.

Here is a general workflow for using Amazon Forecast (a minimal API sketch follows the list):

  1. Create a Dataset Group. This is just a logical container for all the datasets you’re going to use to create your predictor.
  2. Import your source datasets. A nice thing here is that Amazon Forecast facilitates the use of different “versions” of your datasets. As you go about feature engineering, you are bound to create different models which will be based on different underlying datasets. This is absolutely crucial for the process of experimentation and iteration.
  3. Create a predictor. This is another way of saying “create a trained model on your source data.”
  4. Create a forecast using the predictor. This is where you actually generate a forecast looking into the future.

To get started, stage your time series data in a CSV file in S3. You have to follow AWS’s naming convention for the column names. You can also optionally use your domain knowledge to enrich the data with “related time series”: if you think external factors drive the forecast, add those data series, too. You can add multiple complementary time series.

When your datasets are staged, you create a Predictor. A Predictor is just a trained machine learning model. If you choose the “AutoML” option, Amazon will make up to five algorithms compete. It will save the results of all of the models that trained successfully (sometimes an algorithm clashes with the underlying data).

Finally, when your Predictor is done, you can generate a forecast. The forecast is stored in S3, where it can be easily shared with your organization or loaded into any business intelligence tool. It’s always a good idea to visualize the results to give them a reality check.

In part two of this article, we’ll dig into best practices for using Amazon Forecast. And if you’re interested in learning even more about transforming your organization to be more data-driven, check out our DataOps Foundation service that helps you transform your data analytics processes.

-Rob Whelan, Practice Director, Data & Analytics


COVID-19 – A Stress Test for The Remote Workforce

The rapid spread of COVID-19 worldwide and growing concerns have escalated quickly due to our interconnected world. This is a challenging and emotional time, and we are grateful for the support of caregivers and healthcare personnel around the globe who are working tirelessly to stem the tide of this brutal virus and support the many people who are in need. At 2nd Watch, we have instituted mitigation efforts similar to those of many other companies regarding travel, social distancing, and deep cleaning of offices. We are fortunate that the very nature of our cloud business allows us to maintain business continuity for our clients because we are already largely set up for working remotely and telecommuting. We are sharing a recent customer use case in the event businesses are having issues setting up remote operations. Maybe you haven’t felt the extremes of the effects yet, but many have. Imagine this scenario:

Friday 8:00AM

You receive a call and a mandate: your West Coast office has been shut down suddenly and immediately. All 400 employees are sent home. Many didn’t even have the opportunity to collect their laptops or other items from their workspaces. First and foremost, you hope all are healthy and safe. Their wellbeing is paramount.

Now the questions start, “How will I do my job and remain in a safe environment?”

Saturday & Sunday

Over the weekend you deploy desktop-as-a-service (DaaS).  It is secure, compliant, and more importantly, ready to go with the tools the workforce needs.

Monday 8:00AM

Your company is ready to serve the needs of your business and customers.  You can continue under the new “normal” we are experiencing.

2nd Watch is committed to helping our clients during this unprecedented time. The example above is exactly what we did for one of the largest media companies in the world. Not only was DaaS implemented in record time, it was done while meeting the governance and compliance requirements the business follows.

If you need to discuss how to meet your business continuity requirements, let us help you develop a strategy. 2nd Watch is a Premier Amazon Web Services partner and a Microsoft Azure Gold Partner. We have extensive experience with these cloud providers in delivering remote worker solutions while leveraging funding opportunities to reduce capital constraints.

Desktop-as-a-Service (DaaS)

Amazon WorkSpaces and Azure Windows Virtual Desktop solutions are secure ways to enable your workforce to work remotely from home on either personal devices or company owned equipment.  They can be centrally managed and rapidly deployed and configured.  DaaS is the go-to solution to get your new remote workers up and running.

Application Streaming

When there is a limited number of applications that need to be accessed remotely and securely, Amazon AppStream is a solid solution.  Existing applications are deployed quickly with your current authentication platforms.  It works well with mobile and browser-based platforms.

Remote Connectivity

VPN-as-a-Service can be used for rapidly scaling remote connectivity back to your corporate IT assets.  Using industry standards and best practices, you can get your workers access to what they need without the hassle of procuring and provisioning new hardware.

Operational Management

2nd Watch is an audited Managed Service Provider for both AWS and Azure.  We were born in the cloud and have a different perspective on how to manage your infrastructure based on our experience.  Not only can we design and build solutions for the unexpected, we know how to keep your infrastructure running, allowing you to focus on your business.

If we can help you prepare the best course of action for your business, please do not hesitate to include us in your planning. We are committed to helping businesses survive and thrive.

In the meantime, following are several tips for working remotely. We hope these are helpful, and that you and yours will be safe and healthy.

  • Stick to Your Routine – If you’re used to getting up early, keep doing it, even though you may be working from home. Maintaining a normal schedule will help you do your job to the best of your ability, and it’s the best way to ensure that your colleagues, customers and partners will be able to reach you.
  • Maintain Engagements and Connections – If you have meetings or appointments, do your best to keep them, but do so online. There are an abundance of collaboration and conferencing tools available, including Windows Virtual Desktop and Amazon WorkSpaces. They can help you stay connected and productive, wherever you are.
  • Be Healthy – If you’re not accustomed to working remotely, it can be tempting to feel added pressure to perform, which can lead to longer work hours and anxiety. Given the current level of anxiety most of us are feeling already, it’s more important than ever to take breaks, go for a walk, eat well and exercise.

-Ian Willoughby, Chief Architect, 2nd Watch
