
CCPA and the cloud

Since the EU introduced the General Data Protection Regulation (GDPR) in 2018, all eyes have been on the U.S. to see if it will follow suit. While a number of states have enacted data privacy statutes, the California Consumer Privacy Act (CCPA) is the most comprehensive U.S. state law to date. Entities were expected to be in compliance with CCPA as of January 1, 2020.

CCPA compliance requires entities to think about how the regulation will impact their cloud infrastructures and development of cloud-native applications. Specifically, companies must understand where personally identifiable information (PII) and other private data lives, and how to process, validate, complete, and communicate consumer information and consent requests.

What is CCPA and how to ensure compliance

CCPA gives California residents greater privacy rights over the data that companies collect about them. It applies to any business that has customers in California and either has gross revenues over $25 million or acquires personal information from more than 50,000 consumers per year. It also applies to companies that earn more than half their annual revenue selling consumers’ personal information.

In order to ensure compliance, the first thing firms should look at is whether they are collecting PII, and if they are, ensuring they know exactly where it is going. CCPA not only mandates that California consumers have the right to know what PII is being collected, it also states that customers can dictate whether it’s sold or deleted. Further, if a company suffers a security breach, California consumers have the right to sue that company under the state’s data notification law. This increases the potential liability for companies whose security is breached, especially if their security practices do not conform to industry standards.

Regulations regarding data privacy are proliferating, and it is imperative that companies set up an infrastructure foundation that helps them evolve fluidly with these changes to the legal landscape, as opposed to “frankensteining” their environments to play catch up. The first step is data mapping: knowing where all consumer PII lives and, importantly, where California consumer PII lives. This requires geographic segmentation of the data. There are multiple tools, including cloud-native ones, that empower companies with PII discovery and mapping. Secondly, organizations will need to have a data deletion mechanism in place and an audit trail for data requests, so that they can prove they have investigated, validated, and adequately responded to requests made under CCPA. The validation piece is also crucial – companies must make sure the individual requesting the data is who they say they are.

And thirdly, having an opt-in or opt-out system in place that allows consumers to consent to their data being collected in the first place is essential for any company doing business in California. If the website is targeted at children, there must be a specific opt-in request for any collection of California consumer data. These three steps must be backed by an audit trail that can validate each of them.

The cloud

It’s here that we start to consider the impact on cloud journeys and cloud-native apps, as this is where firms can start to leverage tools that Amazon or Azure, for example, already offer, but that haven’t been integral for most businesses in a day-to-day context, until now. This includes machine learning tools for data discovery, which will help companies know exactly where PII lives, so that they may efficiently comply with data subject requests.
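
As a concrete illustration, here is a minimal sketch of kicking off PII discovery with Amazon Macie from boto3. It assumes Macie is already enabled in the account; the account ID, bucket names, and job name are placeholders, and in practice you would wait for the classification job to finish before pulling findings.

```python
import boto3

# Hypothetical account ID and bucket names for illustration only.
ACCOUNT_ID = "123456789012"
BUCKETS = ["customer-data-us-west", "web-app-uploads"]

macie = boto3.client("macie2", region_name="us-west-2")

# One-time sensitive-data discovery job across the buckets above.
job = macie.create_classification_job(
    jobType="ONE_TIME",
    name="ccpa-pii-discovery",
    s3JobDefinition={
        "bucketDefinitions": [{"accountId": ACCOUNT_ID, "buckets": BUCKETS}]
    },
)

# Later, pull the findings to see which objects contain PII and where they live.
finding_ids = macie.list_findings(
    findingCriteria={
        "criterion": {"classificationDetails.jobId": {"eq": [job["jobId"]]}}
    }
)["findingIds"]
for finding in macie.get_findings(findingIds=finding_ids)["findings"]:
    affected = finding["resourcesAffected"]
    print(affected["s3Bucket"]["name"], affected["s3Object"]["key"])
```

The findings map sensitive data down to the bucket and object, which is exactly the kind of inventory you need before you can respond to a deletion or disclosure request.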

Likewise, cloud infrastructures should be set up so that firms aren’t playing catch up later on when data privacy and security legislation is enacted elsewhere. For example, encrypt everything and make sure access control permissions are up to date. Organizations must also prevent configuration drift with tools that automatically close a security gap or port if one gets opened during development.
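
One way to keep those guardrails from drifting is to codify them. The sketch below uses boto3 to create two AWS Config managed rules – one that flags unencrypted EBS volumes and one that flags security groups with SSH open to the world. The rule names are arbitrary, and in practice you would attach a remediation action or Lambda to close any gap automatically.

```python
import boto3

config = boto3.client("config", region_name="us-west-2")

# Flag any EBS volume that is not encrypted.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "encrypted-volumes",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
    }
)

# Flag security groups that allow unrestricted inbound SSH (port 22 open to 0.0.0.0/0).
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
    }
)
```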

For application development teams, it’s vital to follow security best practices, such as CIS benchmarks, NIST standards and the OWASP Top Ten. These teams will be getting the brunt of the workload in terms of developing website opt-out mechanisms, for example, so they must follow best practices and be organized, prepared, and efficient.

The channel and the cloud

For channel partners, there are a number of considerations when it comes to CCPA and the cloud. For one, partners who are in the business of infrastructure consulting should know how the legislation affects their infrastructure and what tools are available to set up a client with an infrastructure that can handle the requests CCPA mandates.

This means having data discovery tools in place, which can be accomplished with both cloud-native versions and third-party software. Also, make sure notification mechanisms are in place, such as email or, on AWS, SNS (Simple Notification Service). Notification mechanisms will help automate responding to data subject requests. Additionally, logging must be enabled to establish an audit trail. Consistent resource tagging and global tagging policies are integral to data mapping and quickly finding data. There’s a lot that can be done from an infrastructure perspective, so firms should familiarize themselves with tools that can facilitate CCPA compliance but that may have never been used in this fashion, or indeed at all.
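
As one example, a minimal SNS setup for routing data subject requests might look like the sketch below; the topic name, email address, and message body are placeholders. Because every publish can be logged, the notifications themselves become part of the audit trail.

```python
import boto3

sns = boto3.client("sns", region_name="us-west-2")

# Topic the privacy team subscribes to; the name is arbitrary.
topic = sns.create_topic(Name="ccpa-data-subject-requests")

# Email subscription so each request reaches a human (and an audit mailbox).
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="privacy-team@example.com",
)

# Publish whenever a consumer request comes in, creating a timestamped record.
sns.publish(
    TopicArn=topic["TopicArn"],
    Subject="New CCPA request",
    Message="Deletion request received for consumer ID 12345 via web form.",
)
```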

Ultimately, when it comes to CCPA, don’t sleep on it. GDPR went into effect less than two years ago, and already we have seen huge fines doled out to the likes of British Airways and Google for compliance failures. The EU has been aggressive about ensuring compliance, and California is likely to take the same approach. Regulators know that in order to give CCPA any teeth, they have to enforce it.

If you’re interested in learning more about how privacy laws might affect cloud development, watch our “CCPA: State Privacy Law Effects on Cloud Development” webinar on-demand, at your convenience.

– Victoria Geronimo, Product Manager – Security & Compliance


Amazon Forecast: Best Practices

In part one of this article, we offered an overview of Amazon Forecast and how to use it. In part two, we get into Amazon Forecast best practices:

Know your business goal

In our data and analytics practice, business value comes first. We want to know and clarify use cases before we talk about technology. Using Amazon Forecast is no different. When creating a forecast, do you want to make sure you always have enough inventory on hand? Or do you want to make sure that all your inventory gets used all the time? This will drive which quantile you look at.

Each quantile – the defaults are 10%, 50%, and 90% – is important for its own reasons, and they should be looked at together to give a range. What is the 50% quantile? The actual value has a 50% chance of coming in above the forecast at this quantile and a 50% chance of coming in below it. The forecast at the 90% quantile has a 90% chance of being higher than the actual value, while the forecast at the 10% quantile has only a 10% chance of being higher. So, if you want to make sure you always have enough inventory on hand, look at the 90% quantile; if you want to make sure you sell all your inventory, use the 10% quantile forecast.
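
Once a forecast exists, you can pull any of these quantiles programmatically through the forecastquery API. The sketch below is illustrative only – the forecast ARN and item_id are hypothetical.

```python
import boto3

forecast_query = boto3.client("forecastquery", region_name="us-west-2")

result = forecast_query.query_forecast(
    ForecastArn="arn:aws:forecast:us-west-2:123456789012:forecast/demo_forecast",
    Filters={"item_id": "sku-001"},
)

# Predictions are keyed by quantile: p10, p50, and p90 by default.
predictions = result["Forecast"]["Predictions"]
for point in predictions["p10"]:  # the conservative "sell it all" view
    print(point["Timestamp"], point["Value"])
```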

Use related time series

Amazon has made it so easy to use related time series in Forecast that you have nothing to lose by making your forecast more robust. All you have to do is make the related series’ time units the same as those of your target time series.

One way to create a related dataset is to use categorical or binary data whose future values are already known – for example, whether the future time is on a weekend or a holiday or there is a concert playing – anything that is on a schedule that you can rely on.

Even if you don’t know if something will happen, you can create multiple forecasts where you vary the future values. For example, if you want to forecast attendance at a baseball game this Sunday, and you want to model the impact of weather, you could create a feature is_raining and try one forecast with “yes, it’s raining” and another with “no, it’s not raining.”
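
Building those what-if scenarios is mostly a data-shaping exercise. Here is a small, hypothetical pandas sketch that writes two related time series files – one where it rains on game day and one where it doesn’t – with timestamps that line up with the target series.

```python
import pandas as pd

# Hypothetical schedule; timestamps must match the target series' frequency.
future = pd.DataFrame({
    "timestamp": pd.date_range("2020-04-06", periods=7, freq="D").strftime("%Y-%m-%d"),
    "item_id": "stadium-attendance",
})

# Scenario A: it rains on Sunday. Scenario B: it stays dry all week.
scenario_rain = future.assign(is_raining=[0, 0, 0, 0, 0, 0, 1])
scenario_dry = future.assign(is_raining=0)

# Each scenario becomes its own related time series CSV (no index column).
scenario_rain.to_csv("related_rain.csv", index=False)
scenario_dry.to_csv("related_dry.csv", index=False)
```

You would then run one forecast per scenario and compare the resulting ranges.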

Look at a range of forecasted values, not a singular forecasted value

Don’t expect the numbers to be precise. One of the biggest values from a forecast is knowing what the likely range of actual values will be. Then, take some time to analyze what drives that range. Can it be made smaller (more accurate) with more related data? If so, can you control any of that related data?

Visualize the results

Show historical and forecast values on one chart. This will give you a sense of how the forecast is trending. You can backfill the chart with actuals as they come in, so you can learn more about your forecast’s accuracy.
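
A simple way to do this, assuming you have exported both your actuals and the forecast quantiles to CSV (the file and column names below are placeholders), is a matplotlib chart with the p10–p90 band shaded:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical exports: historical actuals and the Forecast quantile export.
history = pd.read_csv("actuals.csv", parse_dates=["timestamp"])
forecast = pd.read_csv("forecast_export.csv", parse_dates=["date"])

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(history["timestamp"], history["demand"], label="actuals")
ax.plot(forecast["date"], forecast["p50"], label="p50 forecast")
ax.fill_between(forecast["date"], forecast["p10"], forecast["p90"],
                alpha=0.2, label="p10-p90 range")
ax.set_title("Demand: history and forecast range")
ax.legend()
plt.show()
```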

Choose a “medium term” time horizon

Your time horizon – how far in the future your forecast looks – is either 500 timesteps or ⅓ of your time series data, whichever is smaller. We recommend choosing up to a 10% horizon for starters. This will give you enough forward-looking forecasts to evaluate the usefulness of your results without taking too long.

Save your data prep code

Save the code you use to stage your data for the forecast. You will be doing this again, so you don’t want to repeat yourself. An efficient way to do this is to use PySpark code inside a SageMaker notebook. If you end up using your forecast in production, you will eventually place that code into a Glue ETL pipeline (using PySpark), so it is best to just use PySpark from the start.

Another advantage of using PySpark is that the utilities to load and drop CSV-formatted data to and from S3 are dead simple, and you will be using CSV for all your Forecast work.
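
As a rough sketch, the PySpark staging code might look like the following. The S3 paths, column names, and daily aggregation are placeholders for whatever your source data requires, and the output column order must match the schema you define in Forecast.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("forecast-prep").getOrCreate()

# Hypothetical source location in the data lake.
raw = spark.read.csv("s3://my-data-lake/raw/orders/", header=True, inferSchema=True)

# Aggregate to the forecast frequency: daily demand per item, as a float.
daily = (
    raw.groupBy("item_id", F.to_date("order_ts").alias("timestamp"))
       .agg(F.sum("quantity").cast("double").alias("demand"))
)

# Forecast wants plain CSV; coalesce to one file while the dataset is small.
(daily.coalesce(1)
      .write.mode("overwrite")
      .csv("s3://my-data-lake/forecast/target/"))
```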

Interpret the results!

The guide to interpreting results is here, but admittedly it is a little dense if you are not a statistician. One easy metric to look at, especially if you use multiple algorithms, is Root Mean Squared Error (RMSE). You want this as low as possible, and, in fact, Amazon will choose its winning algorithm largely based on this value.
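
If you want to pull RMSE per algorithm programmatically, get_accuracy_metrics returns it for every model that trained. Here is a minimal sketch with a hypothetical predictor ARN:

```python
import boto3

forecast = boto3.client("forecast", region_name="us-west-2")

# Hypothetical predictor ARN for illustration.
metrics = forecast.get_accuracy_metrics(
    PredictorArn="arn:aws:forecast:us-west-2:123456789012:predictor/demo_predictor"
)

# With AutoML there is one entry per algorithm; lower RMSE is better.
for result in metrics["PredictorEvaluationResults"]:
    algorithm = result.get("AlgorithmArn", "unknown")
    for window in result["TestWindows"]:
        rmse = window.get("Metrics", {}).get("RMSE")
        if rmse is not None:
            print(algorithm, window["EvaluationType"], round(rmse, 3))
```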

It will take some time

How long will it take? If you do select AutoML, expect model training to take a while – at least 20 minutes for even the smallest datasets. If your dataset is large, it can take an hour or several hours. The same is true when you generate the actual forecast. So, start it in the beginning of the day so you can work with it before lunch, or near the end of your day so you can look at it in the morning.

Data prep details (for your data engineer)

  • Match the ‘forecast frequency’ to the frequency of your observation timestamps.
  • Set the demand datatype to a float prior to import (it might be an integer).
  • Get comfortable with `strptime` and `strftime` – you have only two options for timestamp format.
  • Assume all data are from the same time zone. If they are not, make them that way using Python datetime methods.
  • Split out a validation set like this: https://github.com/aws-samples/amazon-forecast-samples/blob/master/notebooks/1.Getting_Data_Ready.ipynb
  • If using pandas dataframes, do not use the index when writing to CSV (see the sketch below).
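
Here is a compact, hypothetical pandas sketch covering most of that checklist – normalizing timestamps with strftime, casting demand to a float, and writing CSV without the index. The column names and split date are placeholders.

```python
import pandas as pd

# Hypothetical raw extract; column names are placeholders.
df = pd.read_csv("raw_orders.csv")

# Forecast accepts only two timestamp formats: "yyyy-MM-dd" or
# "yyyy-MM-dd HH:mm:ss". Collapse everything to UTC, then format.
df["timestamp"] = (
    pd.to_datetime(df["order_ts"], utc=True)
      .dt.tz_convert(None)
      .dt.strftime("%Y-%m-%d")
)

# The demand value must be a float, not an integer.
df["demand"] = df["quantity"].astype(float)

# Hold out the most recent period as a validation set (naive split).
train = df[df["timestamp"] < "2020-03-01"]

# Never write the pandas index into the CSV you hand to Forecast.
train[["item_id", "timestamp", "demand"]].to_csv("target.csv", index=False)
```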

Conclusion

If you’re ever asked to produce a forecast or predict some number in the future, you now have a robust method at your fingertips to get there. With Amazon Forecast, you have access to Amazon.com’s optimized algorithms for time series forecasting. If you can get your target data into CSV format, then you can use a forecast. Before you start, have a business goal in mind – it is essential to think about ranges of possibilities rather than a discrete number. And be sure to keep in mind our best practices for creating a forecast, such as using a “medium term” time horizon, visualizing the results, and saving your data preparation code.

If you’re ready to make better, data-driven decisions, trust your dashboards and reports, confidently bring in new sources for enhanced analysis, create a culture of DataOps, and become AI-ready, contact us to schedule a demo of our DataOps Foundation.

-Rob Whelan, Practice Director, Data & Analytics


Fully-Managed DevOps – Is It Possible?

If you’re in a development or operations role, you probably balked at this title. The truth is, having some other company manage your “DevOps” is an insult to the term. However, bear with me while I put out this scenario:

  • What if you don’t have a team that can manage all your tools that enable you to adopt DevOps methods?
  • Why should you have to spend time managing the tools you use, instead of developing and operating your application?
  • What if your team isn’t ready for this big cultural, process, and tooling change or disagrees on where to begin?

These are key reasons to consider adopting a DevOps platform managed by experts.

Just a Quick Definition:

To bring you along my thought process, let’s first agree on what DevOps IS. DevOps, a term built by combining the words Development and Operations, is a set of cultural values and organizational practices implemented with the intent to improve business outcomes. DevOps methods were initially formed to bridge the gap between Development and Operations so that teams could increase speed to delivery as well as quality of product at the same time. The focus of DevOps is to increase collaboration and feedback between Business Stakeholders, Development, QA, IT or Cloud Operations, and Security to build better products or services.

When companies attempt to adopt DevOps practices, they often think of tooling first. However, a true DevOps transformation includes an evolution of your company culture, processes, collaboration, measurement systems, organizational structure, and automation and tooling — in short, things that cannot be accomplished through automation alone.

Why DevOps?
Adopting DevOps practices can be a gamechanger in your business if implemented correctly. Some of the benefits include:

  • Increase Operational Efficiencies – Simplify the software development toolchain and minimize re-work to reduce total cost of ownership.
  • Deliver Better Products Faster – Accelerate the software delivery process to quickly deliver value to your customers.
  • Reduce Security and Compliance Risk – Simplify processes to comply with internal controls and industry regulations without compromising speed.
  • Improve Product Quality, Reliability, and Performance – Limit context switching, reduce failures, and decrease MTTR while improving customer experience.

The basic goal here is to create and enable a culture of continuous improvement.

DevOps Is Not All Sunshine and Roses:

Despite the promise of DevOps, teams still struggle due to conflicting priorities and opposing goals, lackluster measurement systems, lack of communication or collaborative culture, technology sprawl creating unreliable systems, skill shortage, security bottlenecks, rework slowing progress…you get the picture. Even after attempting to solve these problems, many large enterprises face setbacks including:

  • Reliability: Their existing DevOps toolchain is brittle, complex, and expensive to maintain.
  • Speed: Developers are slowed down by bottlenecks, hand-offs, and re-work.
  • Security: Security is slowing down their release cycle, but they still need to make sure they scan the code for licensing and vulnerability issues before it goes out.
  • Complexity: DevOps is complex and an ongoing process. They don’t currently have the internal skillset to start or continue their progress.
  • Enterprise Ready: SaaS DevOps offerings do not provide the privacy or features they require for enterprise security and management.

Enter Managed DevOps:

Managed DevOps removes much of this complexity by providing you with a proven framework for success beginning with an assessment that sets the go-forward strategy, working on team upskilling, implementing end-to-end tooling, and then finally providing ongoing management and coaching.

If you have these symptoms, Managed DevOps is the cure:

  • Non-Existent or Brittle Pipeline
  • Tools are a Time Suck; No time to focus on application features
  • You know change is necessary, but your team disagrees on where to begin

Because Managed DevOps helps bring your teams along the change curve by providing the key upskilling and support, plus a proven tool-chain, you can kick off immediately without spending months debating tooling or process.

If you’re ready to remove the painful complexity and start to build, test, and deploy applications in the cloud in a continuous and automated way, talk with our DevOps experts about implementing a Managed DevOps solution.

-Stefana Muller, Sr Product Manager


How to Use Amazon Forecast for your Business

How to use Amazon Forecast: What Is it Good For?

How many times have you been asked to predict revenue for next month or next quarter? Do you mostly rely on your gut? Have you ever been asked to support your numbers? Cue sweaty palms frantically churning out spreadsheets.

Maybe you’ve suffered from the supply chain “bullwhip” effect: you order too much inventory, which makes your suppliers hustle, only to deliver a glut of product that you won’t need to replace for a long time, which makes your suppliers sit idle.

Wouldn’t it be nice to plan for your supply chain as tightly as Amazon.com does? With Amazon Forecast, you can do exactly that. In part one of this two-part article, I’ll provide an overview of the Amazon Forecast service and how to get started. Part two of the article will focus on best practices for using Amazon Forecast.

The backstory

Amazon knows a thing or two about inventory planning, given its intense focus on operations. Over the years, it has used multiple algorithms for accurate forecasting. It even fine-tuned them to run in an optimized way on its cloud compute instances. Forecasting demand is important, if for no other reason than to get a “confidence interval” – a range where it’s fairly certain reality will fall, say, 80% of the time.

In true Amazon Web Services fashion, Amazon decided to offer that forecasting capability for sale as Amazon Forecast, a managed service that takes your time series data in CSV format and spits out a forecast into the future. It gives you a customizable confidence interval that you can set to 95%, 90%, 80%, or whatever percentage you need. And you can re-use and re-train the model with actuals as they come in.

When you use the service, you can tell it to run up to five different state-of-the-art algorithms and pick a winner. This saves you the time of deliberating over which algorithm to use.

The best part is that you can make the forecast more robust by adding in “related” time series – any data that you think is correlated to your forecast. For example, you might be predicting electricity demand based on macro scales such as season, but also on a micro level such as whether or not it rained that day.

How to use

Amazon Forecast is considered a serverless service. You don’t have to manage any compute instances to use it. Since it is serverless, you can create multiple scenarios simultaneously – up to three at once. There is no reason to run them in series; you can come up with three scenarios and fire them off all at once. Additionally, it is low-cost, so it is worth trying and experimenting with often. As is generally the case with AWS, you end up paying mostly for the underlying compute and storage, rather than any major premium for using the service. Like any other machine learning task, you have a huge advantage if you have invested in keeping your data orderly and accessible.

Here is a general workflow for using Amazon Forecast:

  1. Create a Dataset Group. This is just a logical container for all the datasets you’re going to use to create your predictor.
  2. Import your source datasets. A nice thing here is that Amazon Forecast facilitates the use of different “versions” of your datasets. As you go about feature engineering, you are bound to create different models which will be based on different underlying datasets. This is absolutely crucial for the process of experimentation and iteration.
  3. Create a predictor. This is another way of saying “create a trained model on your source data.”
  4. Create a forecast using the predictor. This is where you actually generate a forecast looking into the future.

To get started, stage your time series data in a CSV file in S3. You have to follow AWS’s naming convention for the column names. You can also use your domain knowledge to enrich the data with “related time series.” Meaning, if you think external factors drive the forecast, you should add those data series, too. You can add multiple complementary time series.

When your datasets are staged, you create a Predictor. A Predictor is just a trained machine learning model. If you choose the “AutoML” option, Amazon will make up to five algorithms compete. It will save the results of all of the models that trained successfully (sometimes an algorithm clashes with the underlying data).

Finally, when your Predictor is done, you can generate a forecast, which is stored in S3 and can easily be shared with your organization or any business intelligence tool. It’s always a good idea to visualize the results to give them a reality check.
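
For the data engineer, the same workflow can be scripted end to end with boto3. The sketch below is illustrative: the names, S3 path, IAM role, forecast horizon, and daily frequency are all assumptions, and each resource must reach ACTIVE before the next step runs.

```python
import boto3

forecast = boto3.client("forecast", region_name="us-west-2")

# Hypothetical S3 path and IAM role that Forecast can assume to read it.
ROLE_ARN = "arn:aws:iam::123456789012:role/ForecastS3Access"
S3_PATH = "s3://my-data-lake/forecast/target/target.csv"

# 1. Dataset group: a logical container for your datasets.
group = forecast.create_dataset_group(DatasetGroupName="demand_demo", Domain="CUSTOM")

# 2. Target dataset plus an import job pointing at the CSV in S3.
dataset = forecast.create_dataset(
    DatasetName="demand_target",
    Domain="CUSTOM",
    DatasetType="TARGET_TIME_SERIES",
    DataFrequency="D",
    Schema={"Attributes": [
        {"AttributeName": "item_id", "AttributeType": "string"},
        {"AttributeName": "timestamp", "AttributeType": "timestamp"},
        {"AttributeName": "target_value", "AttributeType": "float"},
    ]},
)
forecast.update_dataset_group(
    DatasetGroupArn=group["DatasetGroupArn"],
    DatasetArns=[dataset["DatasetArn"]],
)
forecast.create_dataset_import_job(
    DatasetImportJobName="demand_import",
    DatasetArn=dataset["DatasetArn"],
    DataSource={"S3Config": {"Path": S3_PATH, "RoleArn": ROLE_ARN}},
    TimestampFormat="yyyy-MM-dd",
)

# 3. Predictor: with AutoML, several algorithms compete and the best wins.
predictor = forecast.create_predictor(
    PredictorName="demand_predictor",
    ForecastHorizon=30,
    PerformAutoML=True,
    InputDataConfig={"DatasetGroupArn": group["DatasetGroupArn"]},
    FeaturizationConfig={"ForecastFrequency": "D"},
)

# 4. Forecast: generate future values once the predictor is ACTIVE.
forecast.create_forecast(
    ForecastName="demand_forecast",
    PredictorArn=predictor["PredictorArn"],
)
```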

In part two of this article, we’ll dig into best practices for using Amazon Forecast. And if you’re interested in learning even more about transforming your organization to be more data-driven, check out our DataOps Foundation service that helps you transform your data analytics processes.

-Rob Whelan, Practice Director, Data & Analytics


COVID-19 – A Stress Test for The Remote Workforce

The rapid spread of COVID-19 worldwide and growing concerns have escalated quickly due to our interconnected world. This is a challenging and emotional time, and we are grateful for the support of caregivers and healthcare personnel around the globe who are working tirelessly to stem the tide of this brutal virus and support the many people who are in need.  At 2nd Watch, we have instituted mitigation efforts similar to those of many other companies regarding travel, social distancing and deep cleaning of offices.  We are fortunate the very nature of our cloud business allows us to maintain business continuity for our clients because we are already largely set up for working remotely and telecommuting. We are sharing a recent customer use case in the event businesses are having issues setting up remote operations. Maybe you haven’t felt the extremes of the effects yet, but many have. Imagine this scenario:

Friday 8:00AM

You receive a call and a mandate: your West Coast office has been shut down suddenly and immediately.  All 400 employees are sent home.  Many didn’t even have the opportunity to collect their laptops or other items in their workspaces.  First and foremost, you hope all are healthy and safe.  Their wellbeing is paramount.

Now the questions start, “How will I do my job and remain in a safe environment?”

Saturday & Sunday

Over the weekend you deploy desktop-as-a-service (DaaS).  It is secure, compliant, and more importantly, ready to go with the tools the workforce needs.

Monday 8:00AM

Your company is ready to serve the needs of your business and customers.  You can continue under the new “normal” we are experiencing.

2nd Watch is committed to helping our clients during this unprecedented time.  The example above is exactly what we did for one of the largest media companies in the world.  Not only was DaaS implemented in record time, it was done while meeting the governance and compliance requirements the business follows.

If you need to discuss how to meet your Business Continuity requirements, let us help you develop a strategy.  2nd Watch is a Premier Amazon Web Services partner and a Microsoft Azure Gold Partner.  We have extensive experience with these cloud providers in delivering remote worker solutions while leveraging funding opportunities to reduce capital constraints.

Desktop-as-a-Service (DaaS)

Amazon WorkSpaces and Azure Windows Virtual Desktop solutions are secure ways to enable your workforce to work remotely from home on either personal devices or company owned equipment.  They can be centrally managed and rapidly deployed and configured.  DaaS is the go-to solution to get your new remote workers up and running.
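
To give a sense of how quickly that provisioning can happen, the sketch below creates WorkSpaces for a batch of users with a single boto3 call; the directory ID, bundle ID, and user names are placeholders.

```python
import boto3

workspaces = boto3.client("workspaces", region_name="us-west-2")

# Hypothetical directory, bundle, and user list for illustration.
DIRECTORY_ID = "d-1234567890"
BUNDLE_ID = "wsb-abcdefghij"
users = ["jdoe", "asmith", "bnguyen"]

# One WorkSpace per displaced employee; the API accepts up to 25 per request.
response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": DIRECTORY_ID,
            "UserName": user,
            "BundleId": BUNDLE_ID,
            "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
            "Tags": [{"Key": "purpose", "Value": "remote-work"}],
        }
        for user in users
    ]
)
print("Pending:", [w["WorkspaceId"] for w in response["PendingRequests"]])
```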

Application Streaming

When there is a limited number of applications that need to be accessed remotely and securely, Amazon AppStream is a solid solution.  Existing applications are deployed quickly with your current authentication platforms.  It works well with mobile and browser-based platforms.

Remote Connectivity

VPN-as-a-Service can be used for rapidly scaling remote connectivity back to your corporate IT assets.  Using industry standards and best practices, you can get your workers access to what they need without the hassle of procuring and provisioning new hardware.

Operational Management

2nd Watch is an audited Managed Service Provider for both AWS and Azure.  We were born in the cloud and have a different perspective on how to manage your infrastructure based on our experience.  Not only can we design and build solutions for the unexpected, we know how to keep your infrastructure running, allowing you to focus on your business.

If we can help you prepare the best course of action for your business, please do not hesitate to include us in your planning.  We are committed to helping all of our businesses survive and thrive.

In the meantime, following are several tips for working remotely. We hope these are helpful, and that you and yours will be safe and healthy.

  • Stick to Your Routine – If you’re used to getting up early, keep doing it, even though you may be working from home. Maintaining a normal schedule will help you do your job to the best of your ability, and it’s the best way to ensure that your colleagues, customers and partners will be able to reach you.
  • Maintain Engagements and Connections – If you have meetings or appointments, do your best to keep them, but do so online. There is an abundance of collaboration and conferencing tools available, including Windows Virtual Desktop and Amazon WorkSpaces. They can help you stay connected and productive, wherever you are.
  • Be Healthy – If you’re not accustomed to working remotely, it can be tempting to feel added pressure to perform, which can lead to longer work hours and anxiety. Given the current level of anxiety most of us are feeling already, it’s more important than ever to take breaks, go for a walk, eat well and exercise.

Ian Willoughby, Chief Architect, 2nd Watch


DataOps: Get your data out of silos and into the middle of the action.

Are you facing pressure to make better decisions, faster? Are you uneasy about making too many gut-level business decisions? Are you being asked to have a data strategy from above and wondering how to compete in a data-driven world?

You are not alone. These are common themes emerging in today’s digital economy. Customers of all kinds – from consumers to enterprise businesses – have more choices than ever before. That means your customers are demanding more service, faster, and at a higher quality. How you decide to meet these needs is becoming very complex. You need to choose among many competing options. Increasingly, making these decisions by trusting your gut is a recipe for disaster!

These difficult decisions are not made any easier with the rise of Software as a Service (SaaS). While it’s easy to get up and going with SaaS offerings to handle business productivity needs, with every new SaaS offering you use, you end up silo-ing your data even more. Every department, every business function, has multiple data silos that make holistic business analysis an uphill climb. How can you tie together customer satisfaction and operations data, if the data is in two different systems?

Can you find the data you need? Once you find it, do you trust it? It just shouldn’t be this hard to make business decisions!

We know this is a common problem, because we hear it over and over again from our customers. We continue to hear about this problem, despite the relative maturity of “big data” systems. If big data has been a thing for at least two decades, why are we still struggling to make sense of it all? Our diagnosis is pretty simple:

  1. Data projects that lack a business goal will fail, and most data projects lack a clear business goal, such as “increasing customer satisfaction.”
  2. It’s hard to find people to do the hard work of connecting systems and pulling data out.

So, despite fantastic big data ecosystems being widely available, if you lack a clear business objective and you can’t assign people to roll up their sleeves and move data to where it needs to be, then unfortunately your data initiative will die on the vine.

Our solution to this is very straightforward:

  1. We start with the business goal and never put it on the back burner. Our consultants are trained to listen for and capture business objectives from your team (and people around your team) and hang onto them tightly, while allowing flexibility when it comes to the implementation details. This is very rare in cloud consulting. Most cloud consultancies miss the business goals and skip straight to engineering. We think this is unacceptable and have seen it lead to purposeless, cash-hemorrhaging projects.
  2. We then rapidly get to work and implement our best-practices DataOps solution. It’s pre-built, uses 100% serverless AWS offerings, and is battle-tested over dozens of successful deployments and years of incorporating AWS best practices. Since it is serverless, scaling your DataOps foundation to dozens or hundreds of data sources is painless.
  3. Then, we connect your first several data sources, such as Salesforce, or logs, or customer data, or whatever we together have identified will support your business use case. This is the hard work of rolling up your sleeves, and we have the people to do it.
  4. Within the first two weeks, most customers are analyzing data from multiple sources in a single pane of glass.
  5. Finally, we make your analytics production-ready and help you share the good news around your organization.

These are the benefits that our customers have told us they have received.

  1. You can make better, data-driven decisions. Since we start and end the engagement with your business focus in mind, you are able to make better, data-informed decisions. Where before you were trusting your gut, now you have real, relevant, current data to support your decision making. You’re not driving blind.
  2. You can trust your dashboards and reports. Since we have implemented a best-practices Data Catalog, you have a crystal-clear picture of how your data got to its end state. You are not questioning “is this data real?” because you have clear traceability of data from source to metrics. If you can’t trust your data when you try to act on it, what’s the point?
  3. Your analysis gets even better with yet more data sources. Now that you have a central data lake with easy-to-replicate patterns for bringing in new data, you can make your analyses even richer by adding yet more sources. Many of our customers enrich their data with a wide variety of internal sources, and even external sources like weather and macroeconomic data, to find new correlations and trends that were not possible before.
  4. You feed a culture of DataOps. Word will get around that your team has the ability to drastically simplify data access and analysis because our DataOps Foundation comes with commonsense access rules right out of the box. It is not a threat to give access to the right people – it will help your business operate. This tends to have a flywheel effect. Other departments get excited and want to add their data; analyses get better and richer; then even more people want to bring in their data.
  5. You are now AI-ready. If all the analytical benefits were not enough, you are now also ready for AI and machine learning (ML). It’s just not possible to perform any kind of AI with messy data. With our DataOps Solution, you have solved two problems at once – you have action-ready business data, and you have cleared the path for repeatable AI projects.

You are not alone if you still can’t get the data you need. If your data still feels invisible to you, and you don’t think it should be so hard to crunch data for business outcomes, then you should know that there is a better way. Our DataOps Solution puts your business goals front and center. Our straightforward engagement has you centralizing and analyzing data, in the cloud, securely, within a week or two. Then, you can add more sources to your heart’s content and enjoy the benefits of being data-driven and AI-ready in today’s demanding economy.

To get started, contact us to book a discussion and a demo.

-Rob Whelan, Practice Manager, Data Engineering & Analytics


How Cherwell Software Improved Customer Experience with AMS

2nd Watch helped Cherwell Software onboard to AWS Managed Services (AMS) to provide a holistic approach to SaaS architecture and improve their customer experience. When managing infrastructure was taking time away from Cherwell’s product development, 2nd Watch served as a consulting partner, developing the strategy and engagement to onboard to AMS quickly and enabling Cherwell to provide a great service management experience to its customers.


10 Tips for Your First DevOpsDays NYC

This year I have the privilege of being part of the organizing committee for DevOpsDays New York City. It’s been an exciting (and busy) journey so far, and I’m learning a lot about how to put together a one-track conference with over 600 attendees, 20+ sponsors, and 20 speakers in NYC. There’s a lot that goes into organizing a conference: sponsors, CFPs, agendas, signage, registrations, volunteers, marketing, payments, vendors, and of course, FOOD!

This is only my 2nd DevOpsDays conference, so I’m not yet an expert, though I do know a thing or two about speaking at and attending tech conferences. So, here are my tips for any first-timers that plan on heading to DevOpsDays New York City on March 3 – 4, 2020:

1 – Find the speakers and organizers on Twitter and LinkedIn and connect with them – This will help you grow your network while getting prepped for what you may see at the conference. The cool thing about DevOpsDays is that it is a one-track conference, meaning you won’t miss a speaker if you hang out in the main hall. Many other conferences have different tracks requiring you to make a decision on what to see and what to miss. Another great thing to keep in mind is that the conference will publish the talks on YouTube after the event, allowing you to go back and review anything you missed. Here are the Speakers, the Program, and the Organizers for DevOpsDays NYC 2020:

If you have time, catch my colleague, Victoria Geronimo, presenting “DevSecOps is a Misnomer” to get some insight into exactly where security fits in the “DevSecOps” pipeline and culture, the specific challenges companies face, and the things they do to address those challenges.

2 – Bring your resume, a link to your resume, or a business card if you’re looking for something new. There are tons of sponsors you can talk to that are looking to hire, and there are no ‘badge scanners’ at this conference, so you’ll have to provide them your contact info in a quick, easy way. Remember, if you sign up at a booth with your email address, they’re likely to follow up with a few sales emails, so I always like to have a specific email address for events so that I can sort through these. Gmail helps with this by allowing you to add filters too. You can easily print personal business cards for cheap using a service like Vistaprint. Include your name, title or role (developer, engineer, ops guru, etc.), email and phone number. Add a link to your resume / Twitter / LinkedIn / GitHub as well so people can connect with you.

Here is a list of Sponsors that you can check out now. Do some quick research on what they do and which ones you want to stop by and chat with. It’s good to come with a checklist of things you want to do/accomplish so that you don’t get distracted by all the activity and the other 599 people at the conference: https://devopsdays.org/events/2020-new-york-city/sponsor.

3 – Plan to meet at least 3 people at the event, get connected with them on social media, and follow up for coffee or a quick chat. This is my networking secret that helps me build real contacts instead of just people I said hello to at a conference. We all have something to share and give, so don’t feel like asking for 15 minutes of their time is too much. You can easily connect on LinkedIn at the conference and then send them a note that night saying something like:

“It was nice to meet you at DevOpsDays NYC! I’d love to chat about [opportunities, your experience, your company, how you got to where you are] if you have 15 mins next week to catch up. How does [insert day/time] work for you?”

4 – Get your elevator pitch down. This includes who you are (name and role), where you’re from/company/school, and what you’re looking to get out of the event (learn, meet industry people, be a future speaker, etc.). You may also want to have a quick sentence on what you can offer others at the event like connections, directions to the nearest good coffee, or just a nice conversation.

5 – Open Spaces are fun but may be a little intimidating for many newcomers and introverts. It’s good to know how they work before you’re pulled into a circle and asked to share your thoughts. DevOpsDays puts out a quick guide here for organizers, and below is my six-step open space guide for attendees:

  • Topic Submission – People submit topics they want to talk about or a discussion they want to lead. You can submit a topic too! Just put it on the board when the organizers ask.
  • Everyone Votes for Topics – This is usually facilitated via an online app that you can download. Sometimes stickers are handed out and you will place a sticker next to the topic you’re interested in talking about. Some conferences give 3 votes, others give 1. Choose what open discussion you want to be included in or listen to.
  • Topic Organization – The event staff will organize open space topics by vote into break-out rooms. This is usually done during lunch, so after lunch you can check out where the open spaces you voted for are taking place. At DevOpsDays NYC 2020, we’ll likely have 3 open spaces per day, so you will have 6 opportunities across 2 days to participate in an open talk.
  • Find Your Room – The open spaces will be held on all different floors too so keep an eye out for the room number and signs or ask for help getting to your open space. I like to take a picture of the board so I don’t forget the room numbers.
  • Attend the Open Space – When you enter the room there may be someone in the center introducing what they wanted to talk about. Introduce yourself, get involved, help circle up the chairs, sit down in the circle or near the organizer to get a good view, and try to say at least 1 thing during the open space, even if it’s a question to others. This is your opportunity to share what you know but also a great opportunity to learn. It is okay to leave an open space if it’s not interesting to you. Just walk out of the room. No one will be offended – they’re too busy talking anyway. ;0)
  • Rinse and Repeat – Attend as many open spaces as you’d like and try to take advantage of the time to learn something new. Don’t forget to find a way to continue the conversation if you found it interesting. A great way is to connect with the topic submitter via Twitter or LinkedIn and follow up.

6 – Make time to attend the evening social. This event is in the same venue so you don’t have to go far and food/drink will be served (so you won’t want to miss it). DevOpsDays NYC Evening Social is on March 3rd at 5 pm right after the last session on day 1. It’s a great way to meet the speakers, sponsors, attendees, and organizers and grow your network quickly while in a relaxed environment.

7 – Wear something that makes you comfortable but looks semi-professional in case you’re looking for a job. You could be meeting with recruiters, hiring managers, etc. Also remember this is a DevOps conference, so most of us aren’t wearing suits and ties and dresses. I’ll be wearing jeans, sneakers, and an organizer t-shirt, and bringing a sweater. You do you. Be authentic but neat.

8 – Use Social Media. A great way to gain new Twitter followers is to live-tweet your thoughts during a talk and tag the speaker and the event, @DevOpsDaysNYC. Share what you’ve learned and what they said that really impressed you, and your followers will start flowing in. It’s also great to post on LinkedIn to share your experience through pictures and thoughts. Posting on LinkedIn helps you come up in search results more often for potential employers. Use the hashtags #devops #devopsdays #devopsdaysnyc to make sure your messages get noticed. Don’t forget to follow these hashtags and retweet/like other people’s posts too.

9 – Read and Abide by the Code of Conduct. DevOpsDays has a clear code of conduct providing a harassment-free conference experience for everyone. Remember to read it prior to the event and also report any harassment to an organizer immediately. We will support you. https://devopsdays.org/events/2020-new-york-city/conduct

10 – Take a Break. This is my most important tip for all conferences. You do not have to see every speaker, participate in every activity, or even stay the whole time if your mind/body isn’t into it. It’s not easy being around 600 people in close quarters for 2 days so remember, taking a break in one of the Open Space rooms, taking a quick walk around Central Park, or grabbing a coffee outside of the venue is perfectly fine.

2nd Watch is also sponsoring the event, so make sure to look for us and stop by for a chat!

-Stefana Muller, Sr Product Manager – DevOps & Migration


Protection from Immediate Threats with an AWS Security Rapid Review

Security assessments are a necessity for cloud security, governance, and compliance. Ideally, an assessment will result in a prioritized list of security and compliance gaps within your cloud environment, the context (or standards) for these gaps, and how to fix them. In reality, however, security assessments themselves can have their own vulnerabilities, particularly around scoping and recommendations.

Organizations that do not have in-house security expertise may have trouble defining what they are actually seeking to get out of the assessment. Projects can be ill-scoped, and recommendations may not actually make sense given your security posture and budget. Additionally, many remediation recommendations may just be band-aid solutions and not long-term fixes that will stop the vulnerability from reoccurring. By the end of the engagement, you may end up with a couple of good recommendations, a lot of useless ones, and a month of wasted time and resources.

Enter our AWS Security Rapid Review. This 1-2 week engagement is designed to provide you with a quick turnaround of actionable remediation recommendations. It is scalable from a small sample of accounts to a few hundred. Benefits include:

• Checking your AWS environment against industry-standard benchmarks and 2nd Watch best practices
• List of vulnerabilities
• Threat prioritization
• Prescriptive, actionable remediation recommendations
• Consultation with a 2nd Watch security expert on the underlying systemic issues causing noted vulnerabilities
• 1-2 week turnaround time

This assessment gives you the immediate ability to remediate vulnerabilities as well as the context for why these vulnerabilities are occurring in the first place. You have control over whether you want to just remediate findings or take it a step further and lay down a robust security foundation.

To learn more about our AWS Security Rapid Review, download our datasheet.

-Victoria Geronimo, Product Manager, Security & Compliance


Gartner Report: Don’t Fail Fast in Production; Embed Monitoring Earlier in Your DevOps Cycle

Gartner says, “DevOps initiatives improve speed and agility, but monitoring often starts during production. To provide superior customer experiences, infrastructure and operations leaders need to build instrumentation into the preproduction phase, tracking metrics on availability, performance and service health.”

“How can I&O leaders leverage monitoring practices to continually improve DevOps deployments and performance against business key performance indicators (KPIs)? This research identifies monitoring practices that I&O leaders should embed in the preproduction phase of DevOps cycles to address needs across application development and release management,” says the report.

Access the Gartner report to learn more

Gartner Don’t Fail Fast in Production; Embed Monitoring Earlier in Your DevOps Cycle, 16 July 2019, Pankaj Prasad, George Spafford, Charley Rich
