Amazon Web Services (AWS) Outage Makes the Case for Multi-Region Infrastructure

When Amazon’s cloud computing platform, Amazon Web Services (AWS), suffered an outage this past Tuesday (December 7, 2021), the magnitude of the event was felt globally. What happened, and how can your business learn from this significant outage?

Why was there an AWS outage?

Reported issues within the AWS infrastructure began around 12:00 ET/17:00 GMT on Dec. 7, according to data from real-time outage monitoring service DownDetector.

Amazon reported that the “US-East-1” region in Northern Virginia went down on Tuesday, which disrupted Amazon’s own applications and multiple third-party services that also rely on AWS. The issue was an “impairment of several network devices” that resulted in API errors and ultimately impacted many critical AWS services.

What were the effects of the AWS outage?

The effects of the AWS outage were massive because any problem affecting Amazon impacts hundreds of millions of end-users. AWS constitutes 41% of the global cloud-computing business, and many of the largest companies in the world are dependent on AWS’s cloud computing services. These businesses rent computing, storage, and network capabilities from AWS, which means the outage prevented end-users’ access to a variety of sites and apps across the Internet.

The major websites and apps that suffered from the outage are ones we turn to on a daily basis: Xfinity, Venmo, Google, and Disney+, just to name a few.

On Tuesday morning, users were reporting that they couldn’t log on to a variety of vital accounts. Most of us were going through our normal daily routine of checking the news, our financial accounts, or our Amazon orders, only to frustratingly realize that we couldn’t do so. 

With so many large organizations relying on AWS, when the outage occurred, it felt like the entire Internet went down. 

Benefits of a High Availability Multi-Region Cloud Application Architecture

Even though the outage was a major headache, it serves as an important lesson for those who are relying on a cloud-based infrastructure. As they say, you should learn from mistakes.

So how can your business mitigate, or even avoid, the effects of a major failure within your cloud provider?

At 2nd Watch, we are in favor of a high availability multi-region cloud approach. We advise our clients to build out multi-region application architecture not only because it will support your mission-critical services during an outage, but also because it will make your applications more resilient and improve your end-user experiences by keeping latencies low for a distributed user base. Below is how we think about a multi-region cloud approach and why we believe it is a strong strategy.

1. Increase your Fault Tolerance

Fault tolerance is the ability of a system to endure some kind of failure and continue to operate properly. 

Unfortunately, things happen that are beyond our control (i.e. natural disasters) or things slip through the cracks (i.e. human error), which can impact a data center, an availability zone, or an entire region. However, just because a failure happens doesn’t mean an outage has to happen.

By architecting a multi-region application structure, if there is a regional failure similar to AWS’s east region failure, your company can avoid a complete outage. Having a multi-region architecture grants your business the redundancy required to increase availability and resiliency, ensure business continuity and support disaster recovery plans.
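To make this concrete, here is a minimal sketch (in Python with boto3) of one common building block of a multi-region design: Route 53 DNS failover, which shifts traffic to a standby region when health checks against the primary region fail. The hosted zone ID, domain name, and endpoint IPs are placeholders, and this is only one of several ways to fail over between regions.

import boto3

route53 = boto3.client("route53")

# Health check that continuously probes the primary region's endpoint.
health = route53.create_health_check(
    CallerReference="primary-endpoint-check-1",
    HealthCheckConfig={
        "IPAddress": "203.0.113.10",   # placeholder primary (us-east-1) endpoint
        "Port": 443,
        "Type": "HTTPS",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Primary and secondary failover records: if the health check fails,
# Route 53 starts answering DNS queries with the standby region's endpoint.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",    # placeholder hosted zone
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "primary-us-east-1", "Failover": "PRIMARY",
            "HealthCheckId": health["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "203.0.113.10"}]}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "app.example.com", "Type": "A", "TTL": 60,
            "SetIdentifier": "secondary-us-west-2", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "198.51.100.20"}]}},
    ]},
)

DNS failover only covers traffic routing, of course; the standby region still needs its own copy of the application and data ready to take over.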

2. Lower latency requirements for your worldwide customer base

The benefits of a multi-region approach go beyond disaster recovery and business continuity. By adopting a multi-region application architecture, your company can deliver low latency by keeping data closer to all of your users, even those who are across the globe.

In an increasingly impatient world, keeping latency low is vital for a good user experience, and the only way to maintain low latency is to keep your data close to your users.
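As an illustration of that idea, one way to keep data physically close to a distributed user base is a multi-region replicated data store. The sketch below uses DynamoDB global tables (the 2017.11.29 version) purely as an example; it assumes identical, stream-enabled tables named "sessions" already exist in both regions, and the table and region names are placeholders.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Link the existing "sessions" tables in two regions into a global table so
# reads and writes can be served from whichever region is closest to the user.
# Both tables must share the same name and key schema and have DynamoDB
# Streams enabled before this call will succeed.
dynamodb.create_global_table(
    GlobalTableName="sessions",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)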

3. Comply with Data Privacy Laws & Regulations

“Are you GDPR compliant?” is a question you probably hear frequently. Hopefully your business is, and you want to remain that way. With a multi-region architecture, you can ensure that you are storing data within the legal boundaries. Also, with signs that there will be more regulations each year, you will stay a step ahead with data compliance if you utilize a multi-region approach.

How Can I Implement a Multi-Region Infrastructure Deployment Solution?

A multi-region cloud approach is a proactive way to alleviate potential headaches and grow your business, but without guidance, it can seem daunting in terms of adoption strategy, platform selection, and cost modeling. 

2nd Watch helps you mitigate the risks of potential public cloud outages and deploy a multi-region cloud infrastructure. Through our Cloud Advisory Services, we serve as your trusted advisor for answering key questions, defining strategy, managing change, and providing impartial advice for a wide range of organizational, process, and technical issues critical to successful cloud modernization.

Contact us today to discuss a multi-region application architecture for your business needs!

What to Expect at AWS re:Invent 2021

Welcome back, friends! AWS re:Invent turns 10 this year and once again 2nd Watch is here to help you navigate it like a pro. As we all know now, AWS re:Invent 2021 is back in person in Las Vegas. One addition this year: Amazon Web Services is also offering a virtual event option… well, kind of…. As it currently stands, only the keynotes and leadership sessions will be live streamed for virtual attendees. Breakout sessions will only be live for in-person attendees, but they will be available on-demand after the event.

For the rest of this blog I will try to focus on my thoughts and limit my regurgitation of all the information that you can get from the AWS re:Invent website, such as the AWS Code of Conduct, but I think it’s worth noting what I think are some key highlights that you should know. Oh, and one more thing. I have added a small easter egg to this year’s blog. If you can find a Stan Lee reference, shoot me an email: dustin@2ndwatch.com and call it out. One winner will be picked at random and sent a $25 Amazon gift card. Now let’s get to it.

Some important things to note this year

Now that AWS re:Invent is (mostly) back in person, AWS is implementing proper health measures to prevent the spread of COVID. Make sure to review the health guidelines published by AWS (https://reinvent.awsevents.com/health-measures/). Here is the summary for those who don’t enjoy more eye exercise than necessary; refer to the aforementioned link for more details and FAQs if you do.

  • All badge holders attending in person must be fully vaccinated for COVID-19 (2 weeks after final shot) which means you must provide a record of vaccination in order to receive your badge. AWS makes it clear that there are no ifs, ands or buts on this. No vax proof, no badge. ‘Nuff said!
  • Masks will be required for everyone at the event. Real ones. Unfortunately face lingerie and train robber disguises will not count.

Keynotes at a Glance

This year’s keynotes give you the best of both worlds with both a live option for in person attendees and on-demand viewing option for virtual attendees. The 2021 keynotes include:

  • Adam Selipsky, AWS CEO
  • Peter DeSantis, Senior Vice President, Utility Computing and Apps
  • Werner Vogels, CTO, Amazon.com
  • Swami Sivasubramanian, Vice President, Amazon Machine Learning
  • Global Partner Summit presented by Doug Yeum, Head of AWS Partner Organization; Sandy Carter, Vice President, Worldwide Public Sector Partners and Programs; and Stephen Orban, General Manager of AWS Marketplace and Control Services

2nd Watch Tips n’ Tricks

Over the last 9 years we have watched the AWS re:Invent conference morph into a goliath of an event. Through our tenure there we have picked up an abundance of tips n’ tricks to help us navigate the waters. Some of these you may have seen from my previous blogs, but they still hold strong value, so I have decided to include them. I have also added a couple new gems to the list.

  • App for the win – I cannot stress this one enough. Download and use the AWS Events app. This will help you manage your time as well as navigate around and between the venues.
  • Embrace your extrovert – Consider signing up for the Builder Sessions, Workshops, and Chalk Talks instead of just Breakout sessions. These are often interactive and a great way to learn with your peers.
  • Watch for repeats – AWS is known for adding repeat Breakout sessions for ones that are extremely popular. Keep your eye on the AWS Events app for updates throughout the week.
  • Get ahead of the pack – After Adam Selipsky’s Keynote there will likely be sessions released to cover the new services that are announced. Get ahead of the pack by attending these.
  • No FOMO – Most of the Breakout sessions are recorded and posted online after re:Invent is over. Fear not if you miss a session that you had your eyes on; you can always view it later while eating your lunch, on a break, or doing your business.
  • Get engaged – Don’t be afraid to engage with presenters after the sessions. They are typically there to provide information and love answering questions. Some presenters will also offer up their contact information so that you can follow up again at a later time. Don’t be shy and snag some contact cards for topics relevant to your interests.
  • Bring the XL suitcase – Now that we are back in person, get ready to fill that swag bag! You will need room to bring all that stuff home so have extra room in your suitcase when you arrive.
  • Don’t just swag and run – Look, we all love stuffing the XL suitcase with swag, but don’t forget to engage your peers at the booths while hunting the hottest swag giveaways. Remember that part of the re:Invent experience is to make connections and meet people in your industry. Enjoy it. Even if it makes you a little uncomfortable.
  • Pro tip! Another option if you missed out on reserving a session you wanted is to schedule something else that is near it at the same time. This will allow you to do a drive-by on the session you really wanted and see if there is an open spot. Worst case, head to the backup session that you were able to schedule.

Our re:Invent Predictions

Now that we have you well prepared for the conference, here are a couple of our predictions for what we will see this year. We are not always right on these, but it’s always fun to guess.

  • RDS savings plans will become a reality.
  • Specialty instance types targeted at specific workloads (similar to the new VT1 instance they just announced focused on video).
  • Security hub add-ons for more diverse compliance scanning.
    • Expanded playbooks for compliance remediation.
    • More compliance frameworks to choose from.
  • Potential enhancements to Control Tower.
  • Virtual only attendees will not get the opportunity for the coveted re:Invent hoodie this year.

In Closing…

We are sure that after December 3rd there will be an overwhelming number of new services to sift through, but once the re:Invent 2021 hangover subsides, 2nd Watch will be at the ready and by your side to help you consume and adopt the BEST solutions for your cloud journey. Swing by our booth, #702, for some swag and a chat. We are giving away Gretsch Guitars, and we are super excited to see you!

Finally, don’t forget to schedule a meeting with one of our AWS Cloud Solution Experts while you’re at re:Invent. We would love to hear all about your cloud journey! We hope you are as excited as we are this year and we look forward to seeing you in Las Vegas.

-Dustin Snyder, Director of Cloud Infrastructure & Architecture

An Introduction to AWS Proton

As a business scales, so does its software and infrastructure. As desired outcomes adapt and become more complex, they can quickly create overhead that is difficult for platform teams to manage over time, and these challenges often limit the benefits of embracing containers and serverless. Shared services offer many advantages in these scenarios by providing a consistent developer experience while also increasing productivity and the effectiveness of governance and cost management.

In December 2020, Amazon Web Services announced AWS Proton: a service targeted at providing tooling to manage complex environments while bridging infrastructure and deployment for developers. In this blog we will take a closer look at the benefits of the AWS Proton service offering.

What is AWS Proton?

AWS Proton is a fully managed delivery service, targeted at container and serverless workloads, that provides engineering teams the tooling to automate provisioning and deploy applications while enabling them to maintain observability and enforce compliance and best practices. With AWS Proton, development teams use pre-defined templates to provision infrastructure and deploy their code. This in turn increases developer productivity by allowing them to focus on their code and software delivery, reduces management overhead, and increases release frequency. Teams can use AWS Proton through the AWS Console and the AWS CLI, allowing teams to get started quickly and automate complicated operations over time.

How does it work?

The AWS Proton framework allows administrators to define versioned templates which standardize infrastructure, enforce guardrails, leverage Infrastructure as Code with CloudFormation, and provide CI/CD with CodePipeline and CodeBuild to automate provisioning and deployments. Once service templates are defined, developers can choose a template and use it to deploy their software. As new code is released, the CI/CD pipelines automatically deploy the changes. Additionally, as new template versions are defined, AWS Proton provides a “one-click” interface which allows administrators to roll out infrastructure updates across all outdated template versions.

When is AWS Proton right for you?

AWS Proton is built for teams looking to centrally manage their cloud resources. The service interface is built for teams to provision, deploy, and monitor applications. AWS Proton is worth considering if you are using cloud-native services like serverless applications or if you utilize containers in AWS. The benefits continually grow when working with a service-oriented architecture, microservices, or distributed software, as it eases release management, reduces lead time, and creates an environment for teams to operate within a set of rules with little to no additional overhead. AWS Proton is also a good option if you are looking to introduce Infrastructure as Code or CI/CD pipelines to new or even existing software, as AWS Proton supports linking existing resources.

Getting Started is easy!

Platform Administrators

Since AWS Proton itself is free and you only pay for the underlying resources, you are only a few steps away from giving it a try! First, a member of the platform infrastructure team creates an environment template. An environment defines infrastructure that is foundational to your applications and services, including compute, networking (VPCs), code pipelines, security, and monitoring. Environments are defined via CloudFormation templates and use Jinja for parameters rather than the conventional parameters section in standard CloudFormation templates. You can find template parameter examples in the AWS documentation. You can create, view, update, and manage your environment templates and their versions in the AWS Console.
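For those who prefer the CLI or an SDK over the Console, here is a rough sketch of that first step using Python and boto3. The template name and S3 location are placeholders, and the exact parameter names should be double-checked against the current Proton API documentation.

import boto3

proton = boto3.client("proton", region_name="us-east-1")

# Register the environment template that environments will later be created from.
proton.create_environment_template(
    name="etl-base-env",                       # placeholder template name
    displayName="ETL base environment",
    description="Shared VPC, networking, and monitoring for ETL services",
)

# Publish a versioned template bundle (CloudFormation with Jinja parameters,
# plus a schema file) that was previously uploaded to S3 as a tarball.
proton.create_environment_template_version(
    templateName="etl-base-env",
    description="Initial version",
    source={"s3": {"bucket": "my-proton-templates", "key": "etl-base-env/v1.tar.gz"}},
)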

Once an environment template is created, the platform administrator creates a service template, which defines all resources that are logically related to a service. For example, if we had a container which performs some ETL, this could contain an ECR Repository, ECS Cluster, ECS Service Definition, ECS Task Definition, IAM roles, and the ETL source and target storage.

In another example, we could have an asynchronous Lambda function which performs some background tasks, along with its corresponding execution role. You could also consider using schema files for parameter validation! Like environment templates, you can create, view, update, and manage your service templates and their versions in the AWS Console.

Once the templates have been created, the platform administrator can publish the templates and provision the environment. Since services also include CI/CD pipelines, platform administrators should also configure repository connections by creating the GitHub app connector. This is done in the AWS Developer Tools service, or a link can be found on the AWS Proton page in the Console.

Once authorized, the GitHub app is automatically created and integrated with AWS and CI/CD pipelines will automatically detect available connections during service configuration.
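Under the hood this is an AWS CodeStar connection. As a hypothetical example, a platform administrator could also create the connection programmatically (the connection name below is a placeholder); it still has to be authorized once in the Console by installing the GitHub app before pipelines can use it.

import boto3

codestar = boto3.client("codestar-connections", region_name="us-east-1")

# The connection is created in a PENDING state and must be authorized once in
# the Console (installing the GitHub app) before Proton pipelines can use it.
connection = codestar.create_connection(
    ProviderType="GitHub",
    ConnectionName="proton-github-connection",   # placeholder name
)
print(connection["ConnectionArn"])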

 

At this point, platform administrators should see a stack which contains the environment’s resources. They can validate each resource and review interconnectivity, security, audits, and operational excellence.

Developers

At this point developers can choose which template version they will use to deploy their service. Available service templates can be found in the AWS Console, and developers can review the template and requirements before deployment. Once they have selected the target template, they choose the repository that contains their service code, the GitHub app connection created by the platform administrator, and any parameters required by the service and CodePipeline.
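As a hedged sketch of that developer step using Python and boto3, the snippet below creates a service from a published service template. The service name, template name, repository details, and spec contents are all placeholders (the spec keys are defined by the service template’s schema), and the parameter names should be verified against the current Proton API.

import boto3

proton = boto3.client("proton", region_name="us-east-1")

# The spec is a YAML document whose keys are defined by the chosen service
# template's schema; the values here are placeholders.
spec = """
proton: ServiceSpec
pipeline:
  my_sample_pipeline_required_input: "hello"
instances:
  - name: "etl-service-prod"
    environment: "etl-base-env"
    spec:
      my_sample_service_instance_required_input: "world"
"""

# Create the service from a published template, pointing Proton at the
# repository that holds the service code so the pipeline can deploy it.
proton.create_service(
    name="etl-service",                                    # placeholder service name
    templateName="etl-service-template",                   # placeholder template
    templateMajorVersion="1",
    repositoryConnectionArn="arn:aws:codestar-connections:us-east-1:111122223333:connection/EXAMPLE",
    repositoryId="my-org/etl-service",                     # placeholder GitHub repo
    branchName="main",
    spec=spec,
)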

After some time, developers should be able to see their application stack in CloudFormation, their application’s CodePipeline resources, and the resources for their application accordingly!

In Closing

AWS Proton is a new and exciting service available for those looking to adopt Infrastructure as Code, enable CI/CD pipelines for their products, and enforce compliance, consistent standards, and best practices across their software and infrastructure. Here we explored a simple use case, but real-world scenarios likely require a more thorough examination and implementation.

AWS Proton may require a transition for teams that already utilize IaC, CI/CD, or that have created processes to centrally manage their platform infrastructure. 2nd Watch has over 10 years’ experience in helping companies move to the cloud and implement shared services platforms to simplify modern cloud operations. Start a conversation with a solution expert from 2nd Watch today and together we will assess and create a plan built for your goals and targets!

-Isaiah Grant, Cloud Consultant

Cloud Crunch Podcast: How McDonald’s France is Using Data Lakes to Improve Customer Experience

Adrien Sieg, Head of Data at McDonald’s Global Technology France, Christina Moss, Director of AWS Cloud Services at McDonald’s, and Mathieu Rimlinger, Director of Global Technology France at McDonald’s, talk about their latest technological advancements in the cloud and how McDonald’s is using data lakes to set customer expectations and improve satisfaction. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.

5 Benefits of VMware Cloud on AWS

Everyone’s journey to the cloud is different. Before deciding your direction, you should consider your business goals, risk tolerance, internal skills, cost objectives, and existing technology ecosystem.  For some, the choice is a 100% native cloud-first strategy on a single Cloud Service Provider (CSP). Others will use a mixture of services across multiple providers. And some others will choose a hybrid strategy in some form.  For a hybrid approach, an interesting option worth considering is leveraging VMware Cloud (VMC) on AWS.

VMware Cloud on AWS is a great solution to consider whether you are integrating your on-prem work environment into the cloud, evacuating your datacenter, scaling datacenter extensions, looking at disaster recovery (DR), or focusing on remote workforce enablement.

What is VMware Cloud on AWS?

About three years ago, hundreds of engineers from VMware and AWS spent more than two years bringing the VMware Cloud solution to market. VMware Cloud on AWS refers to the VMware infrastructure stack, or VMware Cloud Foundation. It encompasses the three infrastructure software pieces that VMware is known for: vSphere, NSX, and vSAN. vSphere provides virtualization of compute, NSX is virtualization of the network, and vSAN virtualizes storage. VMC is an instance of the VMware Cloud Foundation stack being executed on AWS bare metal hardware. When you sign up for a VMware Cloud account, you can get access to the entire VMware stack in an AWS availability zone in just 90 minutes.

Traditionally, VMware has been in datacenters. Now, you can combine those servers into one piece of hardware. With AWS, you can now move functionality to the cloud and enjoy the many benefits of this platform.

1. Expanded functionality

There is so much more functionality in the VMware stack than in the cloud alone. There’s also more functionality in the cloud than you can build in your own environment. VMware Cloud on AWS is more than just a traditional VMware stack. It’s all the functionality of NSX, vSAN, and vSphere, plus the latest additions, at your fingertips, allowing you to always run the latest version of VMware to have access to the newest features. VMware takes care of the maintenance, upgrading, and patching, and with VMC being placed in AWS, you have instant access to all of the AWS cloud features in close physical proximity to your application, allowing you to experience improved performance.

2. Easy adoption

If you’re new to the cloud and have experience with VMware, you will easily be able to apply those existing on-prem skills to VMC on AWS. Because vSphere on-prem is the same as vSphere on AWS, it’s backwards compatible. The traditional management interface of vCenter has the same look and feel and operates the same in the cloud as it does on-prem. These mirrored interfaces allow you to preserve the investment you have made in your existing VMware administrators, keeping headcount and employee costs down because you don’t have to hire for new skills or ask existing techs to increase their skillset. This quick familiarity lets you ramp up and use the service much faster than bringing in a completely new platform.

3. Agile scaling capability

After COVID-19 safety precautions sent 80-90% of the workforce home, organizations scrambled to enable and protect their new remote workers. Datacenters and VDI farms weren’t built to scale for the influx, and it’s just not possible to build additional datacenters as fast as necessary. Organizations needed to find already-built hardware, available datacenters, and software that could meet their needs quickly. VMC on AWS solves the problem because it is built to scale without the limitations of on-prem environments.

4. Transition from CAPEX to OPEX

A fundamental change people are seeing from VMC on AWS is the ability to move from a capital expenditures (CAPEX) model to an operating expenditures (OPEX) model, freeing you from exceptionally long and expensive contracts for datacenters and DR locations.

With VMC, you can move to an OPEX model and spread your cost out over time, and the hardware, maintenance, and upgrades are no longer your responsibility. On top of that, the savings in headcount, manpower, and man-hours create a conversation between IT and financial staff as to what’s best for the overall organization.

5. Lower costs

Chances are, you’re already using VMware and recognize it as a premium brand, so if you’re looking at cost solely from a compute point of view, it might appear as if costs are higher. However, if you add up the individual expenses you incur without VMC – including real estate, hardware, software maintenance, headcount, management, and travel costs – and compare that to VMC on AWS, you see the cost-benefit ratio in favor of VMC. And additional resources are saved when you consider all the management roles that are no longer your responsibility. VMware also offers a hybrid loyalty program with incentives and savings for customers who are already invested in the VMware ecosystem.

2nd Watch holds the VMware Cloud on AWS Master Services Competency. If you’re considering the next step in your cloud journey, Contact Us to learn more about our team of VMware Cloud experts, available to help you navigate the best platform for your goals.

5 Best Practices for Managing the Complexities of a Hybrid Cloud Strategy

Hybrid cloud strategies require a fair amount of effort and knowledge to construct, including for infrastructure, orchestration, application, data migration, IT management, and potential issues related to silos. There are a number of complexities to consider to enable seamless integration of a well-constructed hybrid cloud strategy. We recommend employing these 5 best practices as you move toward a multi-cloud or hybrid cloud architecture to ensure a successful transition.

Utilize cloud management tools.

Cloud management providers have responded to the complexities of a hybrid strategy with an explosion of cloud management tools. These tools can look at your automation and governance, lifecycle management, usability, access and more, and perform many tasks with more visibility.

Unique tooling for each cloud provider is especially important. Some partners may recommend a single pane of glass for simplicity, but that can be too simple for service catalogues and when launching new resources. The risk with going too simplistic is missing the opportunity to take advantage of the best aspects of each cloud.

Complete a full assessment of applications and dependencies first.

Before you jump into a hybrid cloud strategy, you need to start with a full assessment of your applications and dependencies. A common misstep is moving applications to the public cloud, while keeping your database in your private cloud or on-prem datacenter. The result is net latency drag, leading to problems like slow page loads and videos that won’t play.

Mapping applications and dependencies to the right cloud resource prior to migration gives you the insight necessary for a complete migration with uninterrupted performance. Based on the mapping, you know what to migrate when, with full visibility into what will be impacted by each. This initial step will also help with cloud implementation and hybrid connect down the line.

Put things in the right place.

This might sound obvious, but it can be challenging to rationalize where to put all your data in a hybrid environment. Start by using the analysis of your applications and dependencies discussed above. The mapping provides insight into traffic flows, networking information, and the different types of data you’re dealing with.

A multi-cloud environment is even more complex with cost implications and networking components. On-prem skills related to wide area network (WAN) connectivity are still necessary as you consider how to monitor the traffic – ingress, egress, east, and west.

Overcome silos.

Silos can be found in all shapes and sizes in an organization, but one major area for silos is in your data. Data is one of the biggest obstacles to moving to the cloud because of the cost of moving it in and out and accessing it. The amount of data you have impacts your migration strategy significantly, so it’s critical to have a clear understanding of where data may be siloed.

Every department has their own data, and all of it must be accounted for prior to migrating. Some data silo issues can be resolved with data lakes and data platforms, but once you realize silos exist, there’s an opportunity to break them down throughout the organization.

An effective method for breaking down silos is getting buy-in from organizational leaders to break the cultural patterns creating silos in the first place. Create a Cloud Center of Excellence (CCoE) during your cloud transformation to understand and address challenges within the context of the hybrid strategy across the organization.

Partner with proven experts.

Many companies have been successful in their hybrid cloud implementation by leveraging a partner for some of the migration, while their own experts manage their internal resources. With a partner by your side, you don’t have to invest in the initial training of your staff all at once. Instead, your teams can integrate those new capabilities and skills as they start to work with the cloud services, which typically increases retention, reduces training time, and increases productivity.

Partners will also have the knowledge necessary to make sure you not only plan but implement and manage the hybrid architecture for overall efficiency. When choosing a partner, make sure they’ve proven the value they can bring. For instance, 2nd Watch is one of only five VMware Cloud on AWS Master Services Competency holders in the United States. That means we have the verified experience to understand the complexities of running a hybrid VMware Cloud implementation.

If you’re interested in learning more about the hybrid cloud consulting and management solutions provided by 2nd Watch, Contact Us to take the next step in your cloud journey.

-Dusty Simoni, Sr Product Manager, Hybrid Cloud

3 Reasons to Consider a Hybrid Cloud Strategy

If there’s one thing IT professionals can agree on, it’s that hybrid cloud computing isn’t going away. Developed in response to our growing dependence on data, the hybrid cloud is being embraced by enterprises and providers alike.

What is Hybrid Cloud Computing?

Hybrid cloud computing can be a combination of private cloud, like VMware, and public cloud; or it can be a combination of cloud providers, like AWS, Azure and Google Cloud. Hybrid cloud architecture might include a managed datacenter or a company’s own datacenter. It could also include both on-prem equipment and cloud applications.

Hybrid cloud computing gained popularity alongside the digital transformation we’ve witnessed taking place for years. As applications evolve and become more dev-centric, they can be stored in the cloud. At the same time, there are still legacy apps that can’t be lifted and shifted into the cloud and, therefore, have to remain in a datacenter.

Ten years ago, hybrid and private clouds were used to combat growth, but now we’re seeing widespread adoption from service providers to meet client needs. These strategies range from on-prem up to the cloud (VMware Cloud (VMC) on AWS), to cloud-down (AWS Outposts), to robust deployment and management frameworks for any endpoint (GCP Anthos).

With that said, for many organizations data may never entirely move to the cloud. A company’s data is their ‘secret sauce,’ and despite the safety of the cloud, not everything lends itself to cloud storage. Depending on what exactly the data is – mainframes, proprietary information, formulas – some businesses don’t feel comfortable with service providers even having access to such business-critical information.

1. Storage

One major reason companies move to the cloud is the large amount of data they are now storing. Some companies might not be able to, or might not want to, build and expand their datacenter as quickly as the business and data requires.

With the option for unlimited storage the cloud provides, it is an easy solution. Rather than having to forecast data growth, prioritize storage, and risk additional costs, a hybrid strategy allows for expansion.

2. Security

The cloud is, in most cases, far more secure than on-prem. However, especially when the cloud first became available, a lot of companies were concerned about who could see their data, potential for leaks, and how to guarantee lockdown. Today, security tools have vastly improved, visibility is much better, and the compliance requirements for cloud providers include a growing number of local and federal authorities. Additionally, third party auditors are used to verify cloud provider practices as well as internal oversight to avoid a potentially fatal data breach. Today, organizations large and small, across industries, and even secret government agencies trust the cloud for secure data storage.

It’s also important to note that the public cloud can be more secure than your own datacenter. For example, if you try to isolate data in your own datacenter or on your own infrastructure, you might find a rogue operator creating shadow IT where you don’t have visibility. With hybrid cloud, you can take advantage of tools like AWS Control Tower, Azure Sentinel, AWS Landing Zone blueprints, and other CSP security tools to ensure control of the system. Similarly, with tooling from VMware and GCP Anthos, you can look to create a single policy and configuration for environment standardization and security across multiple clouds and on-prem in a single management plane.

3. Cost

Hybrid cloud computing is a great option when it comes to cost. On an application level, the cloud lets you scale up or down, and that versatility and flexibility can save costs. But if you’re running always-on, stagnant applications in a large environment, keeping them in a datacenter can be more cost effective. One can make a strong case for a mixture of applications being placed in the public cloud while internal IP apps remain in the datacenter.

You also need to consider the cost of your on-prem environment. There are some cases, depending on the type and format of storage necessary, where the raw cost of a cloud doesn’t deliver a return on investment (ROI). If your datacenter equipment is running near 80% or above utilization, the cost savings might be in your favor to continue running the workload there. Alternately, you should also consider burst capacity as well as your non-consistent workloads. If you don’t need something running 24/7, the cloud lets you turn it off at night to deliver savings.

Consistency of Management Tooling and Staff Skills

The smartest way to move forward with your cloud architecture – hybrid or otherwise – is to consult with cloud computing experts. 2nd Watch helps you choose the most efficient strategy for your business, aids in planning and completing migration in an optimized fashion, and secures your data with comprehensive cloud management. Contact Us to take the next step in your cloud journey.

-Dusty Simoni, Sr Product Manager, Hybrid Cloud

Cloud DR: Recovery Begins Well Before Disaster Strikes

In the world of IT, disasters come in all shapes and sizes, from infrastructure and application outages to human error, data corruption, ransomware, malicious attacks, and other unplanned events. Other than perhaps a hurricane or blizzard, we often don’t have visibility into when a disaster will occur. After the immediate impact of the disaster subsides, the focus rapidly shifts to the recovery.

At the core of the disaster recovery is a focus on how quickly applications and data can be restored to resume servicing your customers. Downtime means a loss of productivity, revenue, or even profit from credits being paid out to your customers for failure to maintain service.

But disaster recovery goes well beyond the post-crisis events, and its success hinges on the preparation done well in advance of any disaster occurring. Now, a disaster recovery strategy should not be confused with a business continuity plan. A business continuity plan is far greater in scope, covering not only recovering your IT systems, data, and applications to service customers again, but how to continue running your business even beyond IT system disruptions.  For example, a business continuity plan will outline what steps to take when the physical building becomes unavailable and your employees can’t come into the office; how to handle supply chain disruptions, etc.

When discussing disaster recovery strategies, oftentimes back-up and disaster recovery are used synonymously. Back-up should factor into your business continuity planning, and in some cases a back-up may be sufficient to restore your systems and meet compliance requirements. However, back-ups are a point-in-time solution and can take significant time to restore your systems, delaying your recovery time. Compounding this dilemma, back-ups are only as up to date as the last snapshot taken, which, for many, could mean losing a complete day’s worth of sales. A solid disaster recovery strategy should not only focus on recovering your systems but do it in a manner that exceeds the business requirements and minimizes the disruption to your customers.

Traditional disaster recovery solutions have required significant investment from both a financial perspective and a human resource perspective. It’s not unusual for enterprises to be required to purchase fully redundant hardware and duplicative software licenses, locate that hardware in geographically dispersed colo facilities, set up connectivity and replication between the two sites, and have IT admins maintain the second site, which is commonly under-utilized.

Cloud based disaster recovery has solved many of these problems and can do it for a fraction of the price. To help bring this solution to our customers, 2nd Watch has partnered with CloudEndure, an AWS Company, to help enterprises accelerate their adoption of Cloud Disaster Recovery.

The CloudEndure Disaster Recovery solution replicates everything in real time, meaning everything is always up to date, down to the second, allowing you to achieve your Recovery Point Objectives (RPOs). CloudEndure provisions a very low-cost staging area in AWS, eliminating the need for duplicate resource provisioning. Should a disaster occur, automated orchestration combined with machine conversion enables you to achieve Recovery Time Objectives (RTOs) of minutes and only pay for the cloud instances when they are actually needed.

Our Cloud Disaster Recovery service provides you a disaster recovery proof of concept for 100 machines in less than 30 days, while allowing you to continue to leverage your entire existing infrastructure.  We apply our proven methodology to ensure your organization is getting optimal value from your existing infrastructure while allowing fast, easy, and cost-effective recovery in the AWS cloud.

Download our datasheet to learn more about our Cloud Disaster Recovery service.

-Dusty Simoni, Sr Product Manager

Cloud Crunch Podcast: Hybrid Cloud Computing

This week on Cloud Crunch, we welcome our first guest, Dusty Simoni, Sr Product Manager at 2nd Watch, to discuss hybrid cloud computing. We dive into what hybrid cloud is, examples of hybrid cloud, benefits, complexities, and how to get started. For this conversation, we look at hybrid cloud as on-premises infrastructure and public cloud – specifically around AWS, Azure, and VMware – and exclude private cloud services. Listen now on Spotify, iHeart Radio, iTunes, or wherever you get your podcasts.

AWS re:Invent 2019: Daily Recap – Tuesday

Day 2 of AWS re:Invent 2019 kicked off with the Las Vegas strip turning into a parking lot as many attendees spent upwards of an hour getting from their hotels to the Sands Expo Convention Center at the Venetian. The increase in attendance this year to almost 65,000 attendees is obvious!

Once you navigated the traffic and arrived at the convention, the highlight of the day was AWS CEO, Andy Jassy’s, Keynote address.

Jassy began by emphasizing that many companies are still trying to make the cloud transformation and struggle or get stuck in the process. According to Jassy, in order for a company to make a successful transformation to the cloud, it must have four things:

  1. Senior leadership conviction and alignment
  2. Top-down aggressive goals
  3. Training for its builders
  4. Refusal to let paralysis stop you before you start

As is typical of his keynote, today’s was filled with announcements of new features on AWS, largely geared for the Enterprise.  We captured 22 new features in all:

The first announcement was of AWS’ new in-house developed Graviton2 chip powering EC2 instances capable of delivering 40% better price performance.

Other features announced were:

AWS Compute Optimizer, a new machine learning-based recommendation service that makes it easy for you to ensure that you are using optimal AWS Compute resources

The ability to run Kubernetes pods on AWS Fargate using Amazon EKS. There’s no need to provision or manage infrastructure, and you pay for resources at pod-level with secure pod-level isolation by design

Amazon S3 Access Points, a new S3 feature that simplifies managing data access at scale for shared data sets on Amazon S3.

AQUA (the Advanced Query Accelerator) for Amazon Redshift, a hardware-accelerated cache that promises up to 10x better query performance than competing cloud-based data warehouses

Amazon Redshift Federated Query that allows you to query and analyze data across operational databases, data warehouses, and data lakes.

Amazon Redshift RA3 Instances with Managed Storage that allows you to size your cluster based only on your compute needs.

UltraWarm, which lets you store and interactively analyze your data using Elasticsearch and Kibana

Amazon Managed Cassandra Service, which enables you to run your Cassandra workloads in the AWS Cloud using the same Cassandra application code

Amazon SageMaker received a ton of love during the keynote with several significant announcements:

Amazon SageMaker Studio is an integrated development environment (IDE) for machine learning (ML) that lets you easily build, train, debug, deploy, and monitor your machine learning models.

Amazon SageMaker Experiments lets you organize, track, and compare your machine learning training experiments on Amazon SageMaker.

Amazon SageMaker Notebooks allows developers to spin up machine learning notebooks in seconds.

Amazon SageMaker Debugger is a new capability that provides complete insights into the training process of machine learning models.

Amazon SageMaker Model Monitor automatically detects concept drift by monitoring models deployed to production.

With Amazon SageMaker Autopilot, Amazon SageMaker can use your tabular data and the target column you specify to automatically train and tune your model, while providing full visibility into the process.

Still more announcements included:

Amazon CodeGuru performs automated code reviews for development teams.

Amazon Kendra is a new enterprise search service powered by machine learning and natural language.

AWS Local Zones place compute, storage and database services close to large cities, beginning with Los Angeles.

Amazon Fraud Detector is a fully managed service to easily identify potentially fraudulent online activities such as online payment fraud and the creation of fake accounts.

Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities.

AWS Wavelength provides seamless access to the breadth of AWS services by embedding AWS compute and storage services at the edge of telecommunications providers’ 5G networks.

And the announcement I am most excited about – the GA launch of AWS Outposts.  Outposts brings AWS public cloud functionality to your on-premises data center. For clients that have struggled with full cloud adoption for various reasons, such as regulatory concerns, data sovereignty, physical security concerns, latency issues, migration issues, etc., Outposts addresses all of these concerns.  The other reason I am extremely excited about Outposts is because 2nd Watch is one of AWS’ Outposts launch partners, able to help you explore this option today!

That wrapped the Keynote highlights for Tuesday and leaves us looking forward to Amazon.com CTO Dr. Werner Vogels’ Keynote on Wednesday, along with the 2nd Watch After Party.  See you there!

-Dusty Simoni, Sr. Product Manager