
Managed Cloud, AMS, and the Enterprise – The Hows and Whys

Read this article on Channel Partner Insight

It’s easy to forget that when enterprises first started moving to the cloud, it was a largely simple process that saw only a handful of people within an organization using the technology. But as cloud usage has become more prevalent, on-site infrastructure and IT operations teams have found themselves having to manage cloud environments. This has not only created a skills gap in many enterprises, but also given rise to cost inefficiencies: teams are either spread more thinly or, more likely, organizations have had to hire additional staff to manage their cloud environments. All of this can be compounded by the challenge of integrating a cloud environment into an existing operation’s security structure.

The good news is that as cloud offerings have developed, managed cloud services can now address all of these challenges. They remove the cost of hiring additional staff, as well as the complexity of running a cloud environment, for a large enterprise that wants to focus on running its business rather than its infrastructure.

As managed cloud services continue their reach into the mainstream, customers will need to be educated on the myriad benefits the offering presents. Services such as AWS Managed Services (AMS) can offer enterprises a much easier cloud experience that doesn’t have to impinge upon the day-to-day running of the business.

Why managed cloud?

For clients questioning why they would benefit from a managed cloud offering, the first thing to note is the clear reduction in the operational costs of cloud. Enterprises no longer have to hire staff, or spend time training existing staff, to manage their cloud infrastructure. Alongside this, a managed cloud services offering gives enterprises direct access to a highly skilled cloud services team that handles that portion of the organization’s infrastructure. Logging, monitoring, event management, continuity management, security and access management, patching, provisioning, incident management and reporting are all included in a managed cloud service offering.

AMS in particular is a highly automated offering, meaning that implementation is straightforward and much quicker than a typical cloud implementation. It also features out-of-the-box support for compliance frameworks such as PCI, HIPAA and GDPR, so security postures won’t be disrupted during or after implementation. The service’s automation also allows requests for change to be completed within minutes, versus having to wait for an in-house IT infrastructure team to approve something before it can be changed.

And managed cloud services can have a significant impact upon an enterprise’s operations. For example, one of our clients – an ISV – was experiencing considerable challenges when evolving its product into a SaaS offering. While it was able to service the product, it wasn’t able to service the cloud infrastructure hosting the SaaS product. Using a managed cloud service – in this case AMS – meant the organization no longer had to manage that infrastructure itself and has since been able to decrease its time to resolution, as well as its cost of operations.

Further, the change enabled the ISV to better predict its cost of goods sold, given that AMS is a relatively steady monthly bill. This allows ISVs to consistently measure margin on their SaaS product offering.

Making the move to AMS

Migrating to AMS from on-premises infrastructure or an existing AWS environment is a straightforward process that consists of four key stages:

  1. Discovering what exists today and what needs to migrate
  2. Identifying the architecture to migrate (single account or multi-account landing zone)
  3. Identifying the migration plan (scheduled app migration in ‘waves’ – see the sketch after this list)
  4. Migrating to AMS
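
For stage 3, the wave plan itself is just structured data: applications grouped so that dependencies migrate first. Below is a minimal, hypothetical Python sketch of that idea; the inventory fields and application names are illustrative placeholders, not output from AMS tooling.

    # Group a discovery inventory into dependency-ordered migration waves.
    # All application names and fields here are hypothetical placeholders.
    inventory = [
        {"app": "billing-api", "dependencies": ["billing-db"]},
        {"app": "billing-db", "dependencies": []},
        {"app": "intranet-wiki", "dependencies": []},
    ]

    def plan_waves(apps):
        """Order apps so each app's dependencies land in an earlier wave."""
        placed, waves, remaining = set(), [], list(apps)
        while remaining:
            wave = [a for a in remaining
                    if all(d in placed for d in a["dependencies"])]
            if not wave:  # circular dependencies: migrate those apps together
                wave = remaining[:]
            waves.append([a["app"] for a in wave])
            placed.update(a["app"] for a in wave)
            remaining = [a for a in remaining if a["app"] not in placed]
        return waves

    print(plan_waves(inventory))
    # [['billing-db', 'intranet-wiki'], ['billing-api']]

In practice, wave sizing also weighs business criticality, maintenance windows, and team capacity, but dependency ordering is the core of the exercise.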

For customers on alternative cloud infrastructures, such as Google Cloud or Microsoft Azure, the migration to AMS is similar. The only bit of heavy lifting (for customers on any cloud platform) can come in integrating an existing operations team with the AMS operations teams so that they know how to work together if there’s a request, an update, or a problem.

Preparing for and performing this people-and-process integration upfront considerably reduces the complexity of cloud operations. This merger of operations usually flows from discovery and doesn’t end until the migration has been tested and the team is operating efficiently.

The path to AMS is a very structured, concrete process, which means clients don’t have to make myriad new decisions on their own. The onboarding process is streamlined and enables us as AMS partners to provide a true timeline for onboarding – something that can often be difficult when you’re dealing with a very large cloud migration.

For example, with AMS we know that discovery and planning take about three weeks, building out the AMS landing zone takes about another three weeks, and these steps can’t run concurrently – so roughly six weeks before migration waves begin. Clients have told us that offering these timescales has been key to their comfort in engaging with this process and knowing they can get it done – clients don’t want an open-ended project that takes six years to migrate.

When it comes down to it, the cloud goal for the majority of customers is to streamline business processes and, ultimately, improve the bottom line. Using a managed cloud service like AMS can reduce costs, reduce operational challenges and increase security, making for a much smoother and easier experience for the enterprise, and a lucrative, open-ended opportunity for channel partners.

-Contributed article by Stefana Muller, Sr Product Manager


Completing Your Company’s Cloud Transformation with Azure Windows Virtual Desktop Foundations

The completion of your IT transformation from data center to full cloud adoption can often be hindered by your desktop administration.  While in the past virtual desktops have largely delivered on their promise to bring standardization, reduce the proliferation of applications and simplify desktop management, they have also had their share of challenges.  As an administrator, you had only a few options:

  • Create multiple VM pools with customized images for different user roles
  • Overload virtual machine images with more apps than needed and hide or block them from the user, making the image bigger
  • Utilize dynamic app streaming, which required additional infrastructure to be managed

With Windows Virtual Desktop, Microsoft Azure has transformed virtual desktop delivery by completely separating user profile data and application delivery from the operating system. The result is a user experience that parallels that of a physical device and simplifies desktop administration further, while Microsoft manages the underlying physical infrastructure.

Benefits of adopting Azure Windows Virtual Desktops (WVD):

  • Support for Windows 10 and Windows 7 virtual desktops – including free Extended Security Updates, making WVD a safe way to keep running Windows 7 after its End of Life (Jan. 14, 2020)
  • Align costs to business needs without overprovisioning hardware – transition from costly CAPEX hardware purchases to an OPEX, cloud consumption-based model
  • Simplify user administration by using Azure Active Directory (AAD) – leverage additional security controls like multifactor authentication (MFA) or conditional access
  • Highly secure with reverse connect technology – eliminates the need to open inbound ports to the VMs, and all user sessions are isolated in both single and multi-session environments
  • Utilize Microsoft Azure native services – Azure Files for file share and Azure NetApp Files for volume level backups

To help you with the transition from standard desktops or an existing on-premises RDS deployment to Microsoft Azure, 2nd Watch has developed Windows Virtual Desktop Foundations.  Windows Virtual Desktop Foundations provides you the blueprints necessary to set up the WVD environment, integrate with Azure native services, create a base Windows image, and train your team on how to create custom images.

With 2nd Watch Windows Virtual Desktop Foundations, you get:

  • Windows Virtual Desktop environment setup
  • Integration with Azure native services (AAD and Azure Storage for profiles)
  • Image build process set-up
  • A baseline custom Windows image
  • Team training on creating custom images
  • Azure Monitor setup for alerting

To learn more about our WVD Foundations, download our datasheet.

-Dusty Simoni, Sr Product Manager


AWS Outposts Overview – Deep Dive

AWS Outposts are fully managed and configurable compute and storage racks built with AWS-designed hardware that allow customers to run compute and storage on-premises, while seamlessly connecting to AWS’ broad array of services in the cloud. Here’s a deeper look at the service.

As an AWS Outposts Partner, 2nd Watch is able to help AWS customers overcome the challenges of managing and supporting infrastructure both on-premises and in the cloud, delivering positive outcomes at scale. Our team is dedicated to helping companies achieve their technology goals by leveraging the agility, breadth of services, and pace of innovation that AWS provides. Read more


Optimizing your environment using AWS Savings Plans

AWS has, surprisingly quietly, released a major enhancement/overhaul to purchasing compute resources up front. To date, purchasing Reserved Instances (Standard or Convertible) has offered AWS users great savings for their static workloads. This works because static workloads tend to utilize a set number of resources, and RIs are paid for in advance, justifying the financial commitment. That said, how often do business needs remain constant, particularly at the pace of today’s product development? Until now, you had two choices if you couldn’t use your RIs: take the loss and let the RI term run out, or undertake the hassle of selling them on the marketplace (potentially at a loss). AWS Savings Plans, on the other hand, are a gigantic leap forward in solving this problem. In fact, you will find that these plans provide far more flexibility and return on your investment than the standard RI model.

Here is the gist of the program, taken from the AWS site:

Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instances usage, regardless of instance family, size, OS, tenancy or AWS Region, and also applies to AWS Fargate usage.

Savings Plans offer significant savings over On Demand, just like EC2 Reserved Instances, in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. You can sign up for Savings Plans for a 1- or 3-year term and easily manage your plans by taking advantage of recommendations, performance reporting and budget alerts in the AWS Cost Explorer. (Jeff Barr, AWS, 11.06.2019)

This is HUGE for AWS clients, because now, for the first time ever, savings can also be applied to workloads that leverage serverless containers—as well as traditional EC2 instances!

Currently there are two AWS Savings Plans, and here’s how they compare:

EC2 Instance Savings Plan:

  • Offers discount levels up to 72% off on-demand rates (the same as Standard RIs).
  • Any changes in instances are restricted to the same AWS Region.
  • Restricts EC2 instances to the same family, but allows changes in instance size and OS (e.g., t3.medium to t3.2xlarge).
  • Similar to Convertible RIs, allows you to increase instance size – with a new twist: you can also reduce instance size! Yes, this means you may no longer have to sell your unused RIs on the marketplace!
  • Bottom line: Slightly less flexible, but you garner a greater discount.

Compute Savings Plan:

  • Offers discount levels up to 66% off on-demand rates (the same rate as Convertible RIs).
  • Spans Regions – a huge draw for companies that need regional or national coverage.
  • More flexible: does not limit EC2 instance family or OS, so you are no longer locked into a specific instance family at the moment of purchase, as you would be with a traditional RI.
  • Allows clients to mix and match AWS compute products, such as EC2 and Fargate; extremely beneficial for clients who use a range of environments for their workloads.
  • Bottom line: More flexible, but with less of a discount.

As with standard RI purchases, understanding your workloads will be key to determining when to use AWS Savings Plans vs. standard RIs (RIs aren’t going anywhere, but we recommend that Savings Plans be used in place of RIs moving forward) vs. On-Demand (including analysis of potential savings from auto-parking, seasonality, elasticity, and so on). Sound a bit overwhelming? Fear not! This is where 2nd Watch’s Cloud Optimization service excels! Enrollment starts with a full analysis of your organization’s usage, AWS environment, and any other requirements/restrictions your organization may have. The final result is a detailed report, expertly determined by our AWS-certified optimization engineers, with our savings findings and recommendations—customized just for you!
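
If you want a quick preview of what AWS itself would recommend before engaging a partner, the Cost Explorer API exposes Savings Plans purchase recommendations. Here is a hedged boto3 sketch; the parameter values are illustrative, and the response fields are worth verifying against the current API documentation.

    import boto3

    # Cost Explorer is served from the us-east-1 endpoint
    ce = boto3.client("ce", region_name="us-east-1")

    resp = ce.get_savings_plans_purchase_recommendation(
        SavingsPlansType="COMPUTE_SP",        # or "EC2_INSTANCE_SP"
        TermInYears="ONE_YEAR",               # or "THREE_YEARS"
        PaymentOption="NO_UPFRONT",
        LookbackPeriodInDays="THIRTY_DAYS",
    )

    summary = resp["SavingsPlansPurchaseRecommendation"][
        "SavingsPlansPurchaseRecommendationSummary"]
    print("Hourly commitment to purchase:",
          summary["HourlyCommitmentToPurchase"])
    print("Estimated monthly savings:",
          summary["EstimatedMonthlySavingsAmount"])

A recommendation like this is based purely on your lookback usage, which is exactly why the deeper analysis described above (auto-parking, seasonality, elasticity) still matters.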

AWS Savings Plans will bring the most immediate value to clients who are either new to AWS or don’t have any RI commitments currently on their account, because Savings Plans cannot, unfortunately, replace existing RI purchases. Whatever your goals, our optimization experts are ready to help you plan the most strategically efficient and cost-effective “next step” of your cloud transformation.

And that’s just the beginning

If you think that AWS Savings Plans may benefit your new or existing AWS deployment, contact us to jumpstart an analysis.

-Jeff Collins, Cloud Optimization Product Management


AWS re:Invent 2019: AWS Product/Service Review, a Networking Perspective

Announcements for days!

AWS re:Invent 2019 has come and gone, and now the collective audience has to sort through the massive list of AWS announcements released at the event.  According to the AWS re:Invent 2019 Recap communication, AWS released 77 products, features and services in just 5 days!  Many of the announcements were in the Machine Learning (ML) space (20 total), closely followed by announcements around Compute (16 total), Analytics (6 total), Networking and Content Delivery (5 total), and AWS Partner Network (5 total), amongst others.  In the area of ML, things like AWS DeepComposer, Amazon SageMaker Studio, and Amazon Fraud Detector topped the list, while in the Compute, Analytics, and Networking space, Amazon EC2 Inf1 Instances, AWS Local Zones, AWS Outposts, Amazon Redshift Data Lake, AWS Transit Gateway Network Manager, and Inter-Region Peering were at the forefront. Here at 2nd Watch we love the cutting-edge ML feature announcements like everyone else, but we always have our eye on those announcements that key in on what our customers need now – announcements that can have an immediate benefit for our customers in their ongoing cloud journey.

All About the Network

In Matt Lehwess’ presentation, Advanced VPC design and new capabilities for Amazon VPC, he kicked off the discussion with a poignant note: “Networking is the foundation of everything, it’s how you build things on AWS, you start with an Amazon VPC and build up from there. Networking is really what underpins everything we do in AWS.  All the services rely on Networking.” This statement strikes a chord here at 2nd Watch, as we have seen that sentiment in action. Over the last couple of years, our customers have been accelerating their use of VPCs, and, as of 2018, Amazon VPC is the number one AWS service used by our customers, with 100% of them using it. We look for that same trend to continue as 2019 comes to an end.  It’s not the sexiest part of AWS, but networking provides the foundation that brings all of the other services together.  So, focusing on newer and more efficient networking tools and architectures to get services to communicate is always at the top of the list when we look at new announcements.  Here are our takes on these key announcements.

AWS Transit Gateway Inter-Region Peering (Multi-Region)

One exciting feature announcement in the networking space is Inter-Region Peering for AWS Transit Gateway.  This feature makes it possible to establish peering connections between Transit Gateways in different AWS Regions.  Previously, connectivity between two Transit Gateways could only be achieved through a Transit VPC, which carried the overhead of running your own networking devices as part of that VPC.  Inter-Region peering for AWS Transit Gateway enables you to remove the Transit VPC and connect Transit Gateways directly.

The solution uses a new static attachment type called a Transit Gateway Peering Attachment that, once created, requires acceptance or rejection from the accepter Transit Gateway.  In the future, AWS will likely allow dynamic attachments, so it advises assigning a unique ASN to each Transit Gateway for the easiest transition.  The solution also uses encrypted VPC peering across the AWS backbone.  Currently, Transit Gateway inter-region peering is supported for gateways in the US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and EU (Frankfurt) Regions, with support for other Regions coming soon.  Note that you can’t peer Transit Gateways in the same Region.
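
For those who want to see the moving parts, here is a minimal boto3 sketch of creating and accepting a peering attachment between two Transit Gateways; the IDs, account number, and Regions are placeholders. After acceptance, you still add static routes pointing at the attachment in each Transit Gateway route table.

    import boto3

    # Requester side: Transit Gateway in us-west-2
    uswest = boto3.client("ec2", region_name="us-west-2")
    attachment = uswest.create_transit_gateway_peering_attachment(
        TransitGatewayId="tgw-0aaa1111bbb22222c",      # requester TGW
        PeerTransitGatewayId="tgw-0ddd3333eee44444f",  # accepter TGW
        PeerAccountId="111122223333",
        PeerRegion="us-east-1",
    )["TransitGatewayPeeringAttachment"]

    # Accepter side: the peer must explicitly accept (or reject) the attachment
    useast = boto3.client("ec2", region_name="us-east-1")
    useast.accept_transit_gateway_peering_attachment(
        TransitGatewayAttachmentId=attachment["TransitGatewayAttachmentId"]
    )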

(Source: Matt Lehwess: Advanced VPC design and new capabilities for Amazon VPC (NET305))

On the surface the ability to connect two Transit Gateways is just an incremental additional feature, but when you start to think of the different use cases as well as the follow-on announcement of Multi-Region Transit Gateway peering and Accelerated VPN solutions, the options for architecture really open up.  This effectively enables you to create a private and highly-performant global network on top of the AWS backbone.  Great stuff!

AWS Transit Gateway Network Manager

This new feature is used to centrally monitor your global network across AWS and on-premises. Transit Gateway Network Manager simplifies the operational complexity of managing networks across Regions and remote locations, taking a dashboard approach to provide a simpler overview of resources that may be spread over several Regions and accounts. To use it, you create a Global Network within the tool – an object in the AWS Transit Gateway Network Manager service that represents your private global network in AWS. It includes your AWS Transit Gateway hubs, their attachments, and on-premises devices, sites, and links.  Once the Global Network is created, you extend the configuration by adding in Transit Gateways; information about your on-premises devices, sites, and links; and the Site-to-Site VPN connections with which they are associated, and you can start using it to visualize and monitor your network. It includes a nice geographic world map view to visualize whether VPNs are up, down, or impaired, as well as Transit Gateway peering connections.
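
As a rough illustration of the setup, the sketch below creates a Global Network and registers a Transit Gateway with it using boto3; the ARN and account number are placeholders, and note that the networkmanager API is served from the us-west-2 endpoint.

    import boto3

    nm = boto3.client("networkmanager", region_name="us-west-2")

    # The Global Network is the container for your hubs, sites, and links
    gn = nm.create_global_network(
        Description="Example global network"
    )["GlobalNetwork"]

    # Register an existing Transit Gateway as a hub in the global network
    nm.register_transit_gateway(
        GlobalNetworkId=gn["GlobalNetworkId"],
        TransitGatewayArn=("arn:aws:ec2:us-west-2:111122223333:"
                           "transit-gateway/tgw-0aaa1111bbb22222c"),
    )

From there, on-premises sites, devices, and links are added with the corresponding create_site, create_device, and create_link calls.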

(Image: AWS Transit Gateway Network Manager geographic map view. Source: AWS)

There’s also a nice Topology feature that shows VPCs, VPNs, Direct Connect gateways, and AWS Transit Gateway-AWS Transit Gateway peering for all registered Transit gateways.  It provides an easier way to understand your entire global infrastructure from a single view.

Another key feature is the integration with SD-WAN providers like Cisco, Aviatrix, and others. Many of these solutions will integrate with AWS Transit Gateway Network Manager and automate the branch-cloud connectivity and provide end-to-end monitoring of the global network from a single dashboard. It’s something we look forward to exploring with these SD-WAN providers in the future.

AWS Local Zones

AWS Local Zones is an interesting new service that addresses challenges we’ve encountered with customers.  Although listed under Compute rather than Networking and Content Delivery on the re:Invent 2019 announcement list, Local Zones is a powerful new feature with networking at its core.

Latency tolerance for application stacks running in a hybrid scenario (i.e., app servers in AWS, database on-prem) is a standard conversation when planning a migration.  Historically, those conversations were predicated on proximity to an AWS Region.  Depending on requirements, customers in Portland, Oregon might have the option to run a hybrid application stack, where those in Southern California may have been excluded.  The announcement of Local Zones (initially just in Los Angeles) opens up those options to markets that were not previously served.  I hope this is the first of many localized resource deployments.

That’s no Region…that’s a Local Zone

Local Zones are interesting in that they only have a subset of the services available in a standard region.  Local Zones are organized as a child of a parent region, notably the Los Angeles Local Zone is a child of the Oregon Region.  API communication is done through Oregon, and even the name of the LA Local Zone AZ maps to Oregon (Oregon AZ1= us-west-2a, Los Angeles AZ1 = us-west-2-lax-1a).  Organizationally, it’s easiest to think of them as remote Availability Zones of existing regions.

As of December 2019, only a limited number of services are available, including EC2, EBS, FSx, ALB, VPC and Single-AZ RDS.  Pricing seems to be roughly 20% higher than in the parent Region.  Given that this is the first Local Zone, we don’t know whether this will always be true or if it depends on location.  One would assume that Los Angeles would be a higher-cost location whether it was a Local Zone or a full Region.
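
Because a Local Zone behaves like a remote AZ of its parent Region, discovering and enabling it goes through the parent Region’s EC2 API. Here is a hedged boto3 sketch; the zone group name matches the LA naming described above, but verify what your account actually exposes.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")  # parent Region

    # Local Zones appear alongside the parent Region's normal AZs
    zones = ec2.describe_availability_zones(AllAvailabilityZones=True)
    for z in zones["AvailabilityZones"]:
        if z.get("ZoneType") == "local-zone":
            print(z["ZoneName"], z["OptInStatus"])  # e.g. us-west-2-lax-1a

    # Local Zones are opt-in at the zone-group level
    ec2.modify_availability_zone_group(
        GroupName="us-west-2-lax-1", OptInStatus="opted-in"
    )

Once opted in, you create a subnet in the Local Zone just as you would in any AZ and launch the supported resources into it.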

All the Things

To see all of the things that were launched at re:Invent 2019 you can check out the re:Invent 2019 Announcement Page. For all AWS announcements, not just re:Invent 2019 launches (e.g., things that launched just prior to re:Invent), check out the What’s New with AWS webpage. If you missed the show completely or just want to re-watch your favorite AWS presenters, you can see many of the re:Invent presentations on the AWS Events YouTube channel. After you’ve done all that research and watched all those videos and are ready to get started, you can always reach out to us at 2nd Watch. We’d love to help!

-Derek Baltazar, Managing Consultant

-Travis Greenstreet, Principal Architect


Top 5 takeaways from AWS re:Invent 2019

AWS re:Invent always presents us with a cornucopia of new cloud capabilities to build with and be inspired by, so listing just a few of the top takeaways can be a real challenge.

There are the announcements that I would classify as “this is cool, I can’t wait to hack on this,” which for me, a MIDI aficionado and ML wannabe, would include DeepComposer. Then there are other announcements that fall in the “good to know in case I ever need it” bucket, such as AWS Local Zones. And finally, there are those that jump out at us because “our clients have been asking for this, hallelujah!”

I’m going to prioritize this list based on the latter group to start, but check back in a few months because, if my DeepComposer synthpop track drops on SoundCloud, I might want to revisit these rankings.

#5 AWS Compute Optimizer

“AWS Compute Optimizer uses machine learning techniques to analyze the history of resource consumption on your account and make well-articulated and actionable recommendations tailored to your resource usage.”

Our options for EC2 instance types continue to evolve and grow over time. These evolutions address optimizations for specialized workloads (e.g., the new Inf1 instances), which means better performance-to-cost for those types of workloads.

The challenge for 2nd Watch clients (and everyone else in the Cloud) is maintaining an up-to-date knowledge of the options available and continually applying the best instance types to the needs of their workloads on an ongoing basis. That is a lot of information to keep up on, understand, and manage, and you’re probably wondering, “how do other companies deal with this?”

The ones managing it best have tools (such as CloudHealth) to help, but cost optimization is an area that requires continual attention and experience to yield the best results. Where AWS Compute Optimizer immediately adds value is in surfacing inefficiencies with no 3rd-party tooling costs to get started. You will need to have the CloudWatch agent installed to gather OS-level metrics for the best results, but this is a standard requirement for these types of tools. What remains to be seen in the coming months is how Compute Optimizer compares to the commercial 3rd-party tools on the market in terms of uncovering overall savings opportunities. The obvious advantage 3rd-party tools retain, however, is the ability to optimize across multiple cloud service providers.
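
Getting findings out of Compute Optimizer takes only a couple of calls once the account is enrolled. A minimal boto3 sketch, with illustrative output handling:

    import boto3

    co = boto3.client("compute-optimizer", region_name="us-east-1")

    # One-time opt-in for the account (recommendations appear after analysis)
    co.update_enrollment_status(status="Active")

    resp = co.get_ec2_instance_recommendations()
    for rec in resp["instanceRecommendations"]:
        best = rec["recommendationOptions"][0]  # options come ranked
        print(rec["instanceArn"],
              rec["finding"],                   # e.g. an over-provisioned finding
              "->", best["instanceType"])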

#4 Amazon ECS now supports Active Directory Authentication using Windows Accounts gMSA

“Amazon Elastic Container Service (ECS) now supports Windows group Managed Service Account (gMSA), a new capability that allows ECS customers to authenticate and authorize their Windows containers with network resources using an Active Directory (AD). Customers can now easily use Integrated Windows Authentication with their Windows containers on ECS to secure services.”

This announcement was not part of any keynote, but thanks to fellow 2nd Watcher and Principal Cloud Consultant Joey Yore for bringing it to my attention – it is definitely making my list. Over the course of the past year, several of our clients on a container adoption path for their .NET workloads were stymied by this very lack of Windows gMSA support.

Drivers for migrating these .NET apps from EC2 to containers include easier blue/green deployments for faster time-to-market, simplified operations from minimizing the overall Windows footprint to monitor and manage, and cost savings associated with the consolidated Windows estate. The challenge encountered was with authentication for these Windows apps: without the gMSA feature, applications would require a time-intensive refactor or an EC2-based solution with management overhead. This raised questions about AWS’s long-term commitment to Windows containers; thankfully, this release signals that Windows is not being sidelined.

#3 AWS Security Hub Gets Smarter

Third on the list is a 2-for-1 special, because security and compliance is one of the most common areas in which our clients come to us for help. Cloud gives builders all of the tools they need to build and run secure applications, but defining controls and ensuring their continual enforcement requires consistent and deliberate work. In response to this need we’ve seen AWS releasing more services that streamline activities for security operations teams. In that list of tools are Amazon GuardDuty, Amazon Macie, and, more recently, AWS Security Hub, which these two selections integrate with:

3a) AWS Identity and Access Management (IAM) Access Analyzer

“AWS IAM Access Analyzer generates comprehensive findings that identify resources that can be accessed from outside an AWS account. AWS IAM Access Analyzer does this by evaluating resource policies using mathematical logic and inference to determine the possible access paths allowed by the policies. AWS IAM Access Analyzer continuously monitors for new or updated policies, and it analyzes permissions granted using policies for their Amazon S3 buckets, AWS KMS keys, Amazon SQS queues, AWS IAM roles, and AWS Lambda functions.”

If you’ve worked with IAM, you know that without deliberate design and planning, it can become an unwieldy mess quickly. Disorganization with your IAM policies means you run the risk of creating inadvertent security holes in your infrastructure, which might not be immediately apparent. This new feature to AWS Security Hub streamlines the process for surfacing those latent IAM issues that may have otherwise gone unnoticed.
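
Turning the analyzer on and reviewing what it finds takes only a couple of API calls. A hedged boto3 sketch, with a placeholder analyzer name:

    import boto3

    aa = boto3.client("accessanalyzer", region_name="us-east-1")

    # Create an account-scoped analyzer (one per account per Region)
    analyzer = aa.create_analyzer(
        analyzerName="account-analyzer", type="ACCOUNT"
    )

    # Each finding identifies a resource reachable from outside the account
    findings = aa.list_findings(analyzerArn=analyzer["arn"])
    for f in findings["findings"]:
        print(f["resourceType"], f.get("resource"), f["status"])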

3b) Amazon Detective

“Amazon Detective is a new service in Preview that makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. Amazon Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that enables you to easily conduct faster and more efficient security investigations.”

The result of Amazon’s acquisition of Sqrrl in 2018, Amazon Detective is another handy tool that helps separate the signal from the noise in the cacophony of cloud event data generated across accounts. What’s different about this service as compared to others like GuardDuty is that it builds relationship graphs which can be used to rapidly identify links (edges) between events (nodes). This is a powerful capability to have when investigating security events and the possible impact across your Cloud portfolio.

#2 EC2 Image Builder

“EC2 Image Builder is a service that makes it easier and faster to build and maintain secure images. Image Builder simplifies the creation, patching, testing, distribution, and sharing of Linux or Windows Server images.”

2nd Watch clients have needed an automated solution to “bake” consistent machine images for years, and our “Machine Image Factory” solution accelerator was developed to efficiently address the need using tools such as Hashicorp Packer, AWS CodeBuild, and AWS CodePipeline.

The reason this solution has been so popular is that by having your own library of images customized to your organization’s requirements (e.g., security configurations, operations tooling, patching), you can release applications faster, with greater consistency, and without burdening your teams’ time or focus watching installation progress bars when they could be working on higher business value activities.

What’s great about AWS releasing this capability as a native service offering is that it is making a best-practice pattern even more accessible to organizations without confusing the business outcome with an array of underlying tools being brought together to make it happen. If your team wants to get started with EC2 Image Builder but you need help with understanding how to get from your current “hand crafted” images to Image Builder’s recipes and tests, we can help!
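
As a flavor of what those recipes look like, here is a hedged boto3 sketch that defines an image recipe from an AWS-managed parent image and an AWS-managed patching component. The ARNs follow Image Builder’s published format but should be treated as placeholders, and a working pipeline additionally needs an infrastructure configuration and a pipeline resource.

    import uuid

    import boto3

    ib = boto3.client("imagebuilder", region_name="us-east-1")

    recipe = ib.create_image_recipe(
        name="hardened-amazon-linux-2",
        semanticVersion="1.0.0",
        # "x.x.x" selects the latest version of the managed image
        parentImage=("arn:aws:imagebuilder:us-east-1:aws:image/"
                     "amazon-linux-2-x86/x.x.x"),
        components=[{
            # AWS-managed OS update component; swap in your own hardening steps
            "componentArn": ("arn:aws:imagebuilder:us-east-1:aws:component/"
                             "update-linux/x.x.x"),
        }],
        clientToken=str(uuid.uuid4()),
    )
    print(recipe["imageRecipeArn"])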

#1 Outposts

“AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience. AWS Outposts is ideal for workloads that need low latency access to on-premises applications or systems, local data processing, and secure storage of sensitive customer data that needs to remain on-premises, including anywhere there is no AWS Region, such as company-controlled environments or countries.”

It’s 2019, and plants are now meat and AWS is hardware you can install in your datacenter. I will leave it to you to guess which topic has been more hotly debated on the 2nd Watch Slack, but amongst our clients, Outposts has made its way into many conversations since its announcement at re:Invent 2018. Coming out of last week’s announcement of Outposts GA, I think we will be seeing a lot more of this service in 2020.

One of the reasons I hear clients inquiring about Outposts is that it fills a gap for workloads with proximity or latency requirements to manufacturing plants or another type of strategic regional facility. This “hyper-local” need echoes the announcement for AWS Local Zones, which presents a footprint for AWS cloud resources targeting a specific geography (Los Angeles, CA initially).

Of course, regional datacenters and other hyperconverged platforms already exist to run these types of workloads, but what is so powerful about Outposts is that it brings the cloud operations model back to the datacenter. The same cloud skills that your teams have developed and hired for don’t need to be set aside to learn a disparate set of skills on a niche hardware vendor platform that could be irrelevant 3 years from now.

I’m excited to see how these picks and all of the new services announced play out over the next year. There is a lot here for businesses to implement in their environments to drive down costs, improve visibility and security, and dial in performance for their differentiating workloads.

Head over to our Twitter account, @2ndWatch, if you think there should be others included in our top 5 list. We’d love to get your take!

-Joe Conlin, Solutions Architect


Demystifying DevOps

My second week at 2nd Watch, it happened.  I was minding my own business, working to build a framework for our products and how they’d be released, when suddenly an email dinged into my inbox. I opened it and gasped. It read, in sum:

  • We want to standardize on our DevOps tooling
  • We have this right now https://landscape.cncf.io/images/landscape.png
  • We need to pick one from each bucket
  • We want to do this next week with everyone in the room, so we don’t create silos

This person sent me the CNCF Landscape and asked us to choose which tool to use from each area to enable “DevOps” in their org. Full stop.

I spent the better part of the day attempting to respond to the message without condescension or simply saying ‘no.’ I mean, the potential client had a point. He said they had a “planet of tools” and wanted to narrow it down to one for each area – a noble cause. They wanted a single resource from 2nd Watch who had used all the tools to help them make a decision. Their intentions were honorable, even though they were clearly misguided from the start.

The thing is, this situation hasn’t come up just once in my career in the DevOps space. It reflects a rampant misunderstanding of the part tooling, automation, and standardization play in a DevOps transformation. Normalizing the tech stack across your enterprise wastes time and creates enemies. This is not DevOps.

In this blog, I’ll attempt to share my views on how to approach your transformation. Hint: We’re not spending time in the CNCF Landscape.

Let’s start with a definition.

DevOps is a set of cultural values and organizational practices that improve business outcomes by increasing collaboration and feedback between Business Stakeholders, Development, QA, IT Operations, and Security.

A true DevOps transformation includes an evolution of your company culture, automation and tooling, processes, collaboration, measurement systems, and organizational structure.

DevOps does not have an ‘end state’ or level of maturity. The goal is to create a culture of continuous improvement.

DevOps methods were initially formed to bridge the gap between Development and Operations so that teams could increase speed to delivery as well as quality of product at the same time. So, it does seem fair that many feel a tool can bridge this gap. However, without increasing communication and shaping company culture to enable reflection and improvement, tooling ends up being a one-time fix for a consistently changing and evolving challenge. Imagine trying to patch a hole in a boat with the best patch available while the rest of the crew is poking holes in the hull. That’s a good way to imagine how implementing tooling to fix cultural problems works.

Simply stated – No, you cannot implement DevOps by standardizing on tooling.

Why is DevOps so difficult?

Anytime you have competing priorities between departments you end up with conflict and difficulty. Development is focused on speed and new features while Operations is required to keep the systems up and running. The two teams have different goals, some of which undo each other.

For example: As the Dev team is trying to get more features to market faster, the Ops team is often causing a roadblock so that they can keep to their Service Level Agreement (SLA). An exciting new feature delivered by Dev may be unstable, causing downtime in Ops. When there isn’t a shared goal of uptime and availability, both teams suffer.

CHART: Dev vs. Ops

Breaking Down Walls

Many liken the conflict between Dev and Ops to a wall. It’s a barrier between the two teams, and oftentimes Dev, in their effort to increase velocity, will throw the new feature or application over the proverbial wall to Ops and expect them to deal with running the app at scale.

Companies attempt to fix this problem by increasing communications between the two teams, though, without shared responsibility and goals, the “fast to market” Dev team and the “keep the system up and running” Ops team are still at odds.

CI/CD ≠ DevOps

I’ll admit, technology can enable your DevOps transformation, though don’t be fooled by the marketing that suggests simply implementing continuous integration and continuous delivery (CI/CD) tooling will kick-start your DevOps journey. Experience shows us that implementing automation without changing culture just results in speeding up our time to failure.

CI/CD can enable better communication, automate the toil and rework that exist, and make your developers and ops team happier. Though, it takes time and effort to train them on the tooling, evaluate processes, match or change your company’s policies, and integrate security. One big area that is often overlooked is executive leadership acceptance and support of these changes.

The Role of Leadership in DevOps

The changes implemented during a DevOps transformation all have a quantifiable impact on a company’s profit, productivity and market share. It is no wonder that organizational leaders and executives have an impact on these changes. The challenge is that their role is often overlooked up front in the process, causing a transformation to stall considerably and produce mixed results.

According to the book Accelerate: Building and Scaling High Performing Technology Organizations by Nicole Forsgren, PhD, Jez Humble, and Gene Kim, the “characteristics of transformational leadership are highly correlated with software delivery performance.” Leaders need to up their game by enabling cross-functional collaboration, building a climate of learning, and supporting effective use and choice of tools (not choosing for the team like we saw at the beginning of this blog).

Leaders should be asking themselves upfront “how can I create a culture of learning and where do we start?” The answer to this question is different for every organization, though some similarities exist. One of the best ways to figure out where to start is to figure out where you are right now.

2nd Watch offers a 2-week DevOps Assessment and Strategy engagement that helps you identify and assess a single value stream to pinpoint the areas for growth and transformation. This is a good start to identify key areas for improvement. This assessment follows the CALMS framework originally developed by Damon Edwards and John Willis at DevOpsDays 2010 and then enhanced by Jez Humble. It’s an acronym representing the fundamental elements of DevOps:  Culture, Automation, Lean, Metrics, and Sharing.

  • Culture – a culture of shared responsibility, owned by everyone
  • Automation – eliminating toil, make automation continuous
  • Lean – visualize work in progress, limit batch sizes and manage queue lengths
  • Metrics – collect data on everything and provide visibility into systems and data
  • Sharing – encourage ongoing communication, build channels to enable this

While looking at each of these pillars we overlay the people, process and technology aspects to ensure full coverage of the methodology.

  • People: We assess your current team organizational structure, culture, and the role security plays. We also assess what training may be required to get your team off on the right foot.
  • Process: We review your collaboration methods, processes and best practices.
  • Technology: We identify any gaps in your current tool pipeline, including image creation, CI/CD, monitoring, alerting, log aggregation, security and compliance, and configuration management.

What’s Next?

DevOps is over 10 years old, and it’s moving out of the ‘concept’ and ‘nice to have’ phase into a necessity for companies keeping up in the digital economy. New technologies like serverless, containers, and machine learning, along with a growing focus on security, are shifting the landscape, making it more essential that companies adopt the growth mindset expected in a DevOps culture.

Don’t be left behind. DevOps is no longer a fad diet; it’s a lifestyle change. And if you can’t get enough of demystifying DevOps, check out our webinar, ‘Demystifying DevOps: Identifying Your Path to Faster Releases and Higher Quality,’ on-demand.

-Stefana Muller, Sr Product Manager


The Simple Path to AWS Managed Services (AMS): AWS re:Invent 2019 Breakout Session On-Demand

With a week full of sessions, bootcamps and extra-curriculars at AWS re:Invent, you might not have had time to make it to our breakout session.

Watch ‘The Simple Path to AMS’ On-Demand

Learn how to accelerate your journey to the cloud by using AWS Managed Services (AMS), including the process for assessing, migrating and operationalizing your infrastructure from your on-premises datacenter or existing cloud environment to AMS. Discover key steps to streamline this process using automation and infrastructure as code to set up network connectivity, access management, logging, monitoring, backups and configuration, as well as integration points for an existing managed service provider to seamlessly work with AMS.


AWS re:Invent 2019: Daily Recap – Thursday

Thursday marked the last full day of AWS re:Invent 2019 and the morning after another outstanding 2nd Watch party. If you attended, it is understandable if you were unable to make Werner Vogels’ keynote address.  Have no fear, 2nd Watch’s Victoria Geronimo has recapped all the highlights for you in her blog post, or you can watch it here.  This year, Vogels focused more on how AWS builds to support microservices instead of on new announcements. As usual, his t-shirt choice was a huge topic of conversation.

It has been another great week here in Vegas, and again I am amazed at all the new and interesting people we get to talk to during this conference.  It is truly a global experience getting to talk to people from all over the world and some AWS Heroes.  I hope we got a chance to meet you at the 2nd Watch booth.  If you needed some relaxation time, AWS provided plenty of areas and opportunities to play, including broomball, dodgeball and the final party, re:Play, which featured Anderson .Paak, as well as A-Trak, Jamestown Revival, Jen Lasher, Miya Folick, and STS9.

A few of the interesting announcements on Thursday included:

  • The Amazon Builders’ Library, which includes articles on how AWS architects and builds to support their own business.
  • Machine Learning Embark Program to help customers train their workforce in machine learning
  • Amazon Fraud Detector, a fully managed service that makes it easy to identify potentially fraudulent online activities such as online payment fraud and the creation of fake accounts
  • UltraWarm, a fully managed, low-cost, warm storage tier for Amazon Elasticsearch Service that takes a new approach to hot-warm tiering, offering up to 900TB of storage at almost a 90% cost reduction over existing options
  • Advanced Query Accelerator (AQUA) for Amazon Redshift is a new distributed and hardware-accelerated cache that enables Redshift to run up to 10x faster than any other cloud data warehouse

As usual, the announcements this week show that AWS continues to listen to its customers and release services to fill those needs.  There are still sessions going on today and thousands heading to the airport.  Travel safe and see everyone next year November 30 – December 4, 2020 in Las Vegas.

-Larry Cusick, Solutions Architect


2nd Watch Earns 2019 Cloud Services Provider of the Year Award

We are honored to announce that Channel Partner Insight (CPI) awarded 2nd Watch the Cloud Services Provider of the Year Award last night at its 2019 Channel Innovation Awards (CIA) ceremony in New York City. The Cloud Services Provider of the Year award is designed for solution providers that can demonstrate their skill and ability to drive client success and recognizes our work helping the largest brands adopt and use cloud infrastructure.

“Innovation in the channel is needed now more than ever,” said Josh Budd, editor of Channel Partner Insight. “The chronic skills gap is widening, yet channel partners are making great strides in enabling customers to modernize their infrastructures and develop smarter, more efficient services. We are excited to recognize 2nd Watch’s contributions as a leading channel player and look forward to monitoring their continued growth and success.”

The CIAs are for solution providers, distributors and vendors that are galvanizing the channel and advancing new ideas and opportunities. Independently run, the CIAs shine a spotlight on innovation and achievement in the North American channel over the past year.

Channel Partner Insight provides leaders of resellers, distributors, MSPs and other specialist consultancies with exclusive analysis of the fast-changing channel sector in Europe and the US.
