
Improved Performance and Disaster Recovery with VMware Cloud on AWS

Even though public cloud adoption has become mainstream among enterprises, the heavily touted full cloud adoption has not become a reality for many companies, nor will it for quite some time. Instead, we see greater adoption of hybrid cloud, a mixture of public and private clouds, as the predominant deployment model for IT services. With private cloud deployments largely built on the market share leader, VMware, that lends even more credence to a VMware Cloud on AWS solution.

Looking back two years to when VMware and AWS announced that they had co-engineered a cloud solution, the partnership makes a lot more sense now. That wasn't always the case. I'll be among the first to admit that I failed to see how the two competing solutions would coexist in a way that provided value to the customer. But then again, I was fully drinking the cloud punch that said refactoring applications and deploying with a "cattle vs. pets" mentality was necessary to enable a full-on digital transformation and merely survive in the evolving as-a-Service world.

What I was not considering was that more than 75% of private clouds were running on VMware.  Or that companies had made a significant investment into not only the licensing and tooling, but also in their people, to run VMware.  It would not have made sense to move everything to the cloud in many situations. 

I viewed it solely as a “lift and shift” opportunity.  It provided a means for companies to move their IT infrastructure out of the data center and “check the box” for fully migrating to the cloud while allowing for the gradual adoption of AWS cloud native solutions as they trained staff accordingly.   

While it is true that performing a complete data center evacuation is a common request, with various factors influencing the decision, delaying cloud native adoption is less of a driver. Some companies make the decision because they have been unsuccessful in renegotiating their contract with their colocation provider and find themselves in a tough situation: rapidly move or be locked in for another lengthy contract. In other situations, the CIO has decided that their valuable human capital would be better spent delivering higher value to the company than running a data center, or that converting IT infrastructure from a CAPEX to an OPEX model works better for the business.

However, there are two use cases that seem to be bigger drivers of VMware Cloud on AWS: the need for improved performance and disaster recovery.

Aside from on-demand access to infrastructure, another big advantage of AWS is the sheer number of solutions it has created that become available to use in a matter of minutes and can be easily connected to your applications residing on VMware VMs. With VMware Hybrid Cloud Extension (HCX), moving applications between on-premises VMware deployments and VMware Cloud on AWS deployments is seamless. This allows your VMs to be closer to the dependent AWS tooling, which reduces latency and may result in improved performance for your users. If you have a geographically dispersed user base, you can easily set up a VMware cluster in a region much closer to those users, further reducing latency.

I do want to caution, though, that prior to migrating your applications to VMware Cloud on AWS, you should create a dependency map of all the VMs in your on-premises environment. It is necessary to have a thorough understanding of which other VMs your applications communicate with. We have seen numerous cases where proper identification of dependencies has not occurred, resulting in dissatisfaction when the application is moved to VMware Cloud on AWS but the SAP database remains on-premises. So, while you may have brought the application closer to your users, performance can suffer if its dependencies are not located nearby.

The other use case that has been gaining adoption is the ability to stand up a disaster recovery environment. With severe natural disasters occurring at what seems like an increasing rate, there is a real threat that your business could be impacted by downtime. VMware Cloud on AWS, coupled with VMware Site Recovery Manager, gives you the opportunity to put a business continuity plan in place across geographically diverse regions to help ensure that your business keeps running.

The other exciting thing is that hybrid cloud no longer has to be located outside your data center. VMware Cloud on AWS has gained such widespread acceptance that, at AWS re:Invent 2019, VMware announced the opening of a VMware Cloud on AWS Outposts beta program, which brings the popular features of the AWS Cloud right into your data center to work alongside VMware. This seems best suited for clients who need the benefits of VMware Cloud on AWS but have data sovereignty requirements or legacy applications that simply cannot migrate to an off-premises VMware Cloud.

As one of only a handful of North American VMware Partners to possess the VMware Master Services Competency in VMware Cloud on AWS, 2nd Watch has performed numerous successful VMware Cloud on AWS Implementations.  We also support AWS Outposts, helping AWS customers overcome challenges that exist due to managing and supporting infrastructures both on-premises and in cloud environments, for a truly consistent hybrid experience.

If you want to understand how VMware Cloud on AWS can further enable your hybrid cloud adoption, schedule a VMware Cloud on AWS Workshop – a 4-hour, complimentary, on-site overview of VMware Cloud on AWS and appropriate use cases – to see if it is right for your business.  

-Dusty Simoni, Sr Product Manager


What do we love about 2nd Watch?

What do we love most about 2nd Watch? Find out what makes a great day at work for us and our favorite things about working at 2nd Watch!


Managed Cloud, AMS, and the Enterprise – The Hows and Whys

Read this article on Channel Partner Insights

It’s easy to forget that when enterprises first started moving to the cloud, it was a largely simple process that saw only a handful of people within an organization using the technology. As cloud usage has become more prevalent, however, on-site infrastructure and IT operations teams have found themselves having to manage cloud environments. This has not only created a skills gap in many enterprises, but also given rise to cost inefficiencies, as teams are either spread more thinly or, more likely, organizations have had to hire additional staff to manage their cloud environments. All of this can be compounded by trying to integrate a cloud environment into an organization’s existing security structure.

The good news is that as cloud offerings have developed, all of these challenges can be addressed by managed cloud services. They remove added costs by negating the need for extra staff, and they remove the complexity of running a cloud environment for a large enterprise that wants to focus on running its business rather than its infrastructure.

As managed cloud services continue their reach into the mainstream, customers will need to be educated on the myriad benefits the offering presents. Services such as AWS Managed Services (AMS) can offer enterprises a much easier cloud experience that doesn’t have to impinge upon the day-to-day running of the business.

Why managed cloud?

For clients questioning why they would benefit from a managed cloud offering, the first thing to note is that there is a clear reduction in the operational costs of cloud. Enterprises no longer have to hire staff or spend time training existing staff to manage their cloud infrastructure. With a managed cloud services offering, enterprises also gain direct access to a highly skilled cloud team that handles that portion of the organization’s infrastructure. Aspects like logging, monitoring, event management, continuity management, security and access management, patching, provisioning, incident response, and reporting are all included in a managed cloud service offering.

AMS in particular is a highly automated offering, meaning implementation is straightforward and much quicker than a typical cloud implementation. It also features out-of-the-box compliance with frameworks such as PCI, HIPAA, and GDPR, meaning security postures won’t be disrupted during or after implementation. The service’s automation also allows requests for change to be completed within minutes, versus having to wait for an in-house IT infrastructure team to approve a change before it can be made.

And managed cloud services can have a significant impact upon an enterprise’s operations. For example, one of our clients – an ISV – was experiencing considerable challenges when evolving its product into a SaaS offering. While it was able to service the product, it wasn’t able to service the cloud infrastructure hosting the SaaS product. Using a managed cloud service – in this case AMS – meant the organization no longer had to manage that infrastructure itself and has since been able to decrease its time to resolution, as well as its cost of operations.

Further, the change enabled the ISV to better predict its cost of goods sold, given that AMS is a relatively steady monthly expense. This allows ISVs to consistently measure margin on their SaaS product offerings.

Making the move to AMS

Migrating to AMS from on-premise infrastructure or an existing AWS environment is a straightforward process that consists of four key stages:

  1. Discovering what exists today and what needs to migrate
  2. Identifying the architecture to migrate (single account or multi-account landing zone)
  3. Identifying the migration plan (scheduled app migration in ‘waves’)
  4. Migrating to AMS

For customers on alternative cloud infrastructures, such as Google Cloud or Microsoft Azure, the migration to AMS is similar. The only bit of heavy lifting (for customers on any cloud platform) can come in integrating an existing operations team with the AMS operations teams so that they know how to work together if there’s a request, an update, or a problem.

Preparing for and performing this people-and-process integration upfront considerably reduces the complexity of cloud operations. This merger of operations usually flows from discovery and doesn’t end until the migration has been tested and the team is operating efficiently.

The path to AMS is a very structured, concrete process, which means clients don’t have to make myriad new decisions on their own. The onboarding process is streamlined and enables us as AMS partners to provide a true timeline for onboarding – something that can often be difficult when you’re dealing with a very large cloud migration.

For example, with AMS we know that discovery and planning take about three weeks, building out the AMS landing zone takes another three weeks or so, and these steps can’t run concurrently. We’ve received client feedback telling us that offering these timescales has been key to their comfort in engaging with the process and knowing they can get it done. Clients don’t want an open-ended project that takes six years to migrate.

When it comes down to it, the cloud goal for the majority of customers is to streamline business processes and, ultimately, improve the bottom line. Using a managed cloud service like AMS can reduce costs, reduce operational challenges, and increase security, making for a much smoother and easier experience for the enterprise, and a lucrative, open-ended opportunity for channel partners.

-Contributed article by Stefana Muller, Sr Product Manager


Completing Your Company’s Cloud Transformation with Azure Windows Virtual Desktop Foundations

The completion of your IT transformation from data center to full cloud adoption can often be hindered by desktop administration.  While virtual desktops have largely delivered on their promise to bring standardization, reduce the proliferation of applications, and simplify desktop management, they have also had their share of challenges.  As an administrator, your options were limited to:

  • Create multiple VM pools with customized images for different user roles
  • Overload virtual machine images with more apps than needed and hide or block them from the user, making the image bigger
  • Utilize dynamic app streaming, which required additional infrastructure to be managed

With Windows Virtual Desktop, Microsoft Azure has transformed virtual desktop delivery by completely separating user profile data and application delivery from the operating system. The result is a user experience that parallels that of a physical device, further simplified desktop administration, and relief from managing the underlying physical infrastructure.

Benefits of adopting Azure Windows Virtual Desktops (WVD):

  • Support for Windows 10 and Windows 7 virtual desktops – the only way to safely run Windows 7 after its End of Life (Jan. 14, 2020)
  • No need to overprovision hardware by aligning costs to business needs – transition from costly CAPEX hardware purchases to OPEX cloud consumption-based model
  • Simplify user administration by using Azure Active Directory (AAD) – leverage additional security controls like multifactor authentication (MFA) or conditional access
  • Highly secure with reverse connect technology – eliminates the need to open inbound ports to the VMs, and all user sessions are isolated in both single and multi-session environments
  • Utilize Microsoft Azure native services – Azure Files for file share and Azure NetApp Files for volume level backups

To help you with the transition from standard desktops or an existing on-premises RDS deployment to Microsoft Azure, 2nd Watch has developed Windows Virtual Desktop Foundations.  Windows Virtual Desktop Foundations provides you the blueprints necessary to set up the WVD environment, integrate with Azure native services, create a base Windows image, and train your team on how to create custom images.

With 2nd Watch Windows Virtual Desktop Foundations, you get:

  • Windows Virtual Desktop environment setup
  • Integration with Azure native services (AAD and AZ Storage for profiles)
  • Image build process set-up
  • A baseline custom Windows image
  • Team training on creating custom images
  • AZ Monitor setup for alerting

To learn more about our WVD Foundations, download our datasheet.

-Dusty Simoni, Sr Product Manager


AWS Outposts Overview – Deep Dive

AWS Outposts are fully managed and configurable compute and storage racks built with AWS-designed hardware that allow customers to run compute and storage on-premises, while seamlessly connecting to AWS’ broad array of services in the cloud. Here’s a deeper look at the service.

As an AWS Outposts Partner, 2nd Watch is able to help AWS customers overcome the challenges that come with managing and supporting infrastructure both on-premises and in cloud environments, and to deliver positive outcomes at scale. Our team is dedicated to helping companies achieve their technology goals by leveraging the agility, breadth of services, and pace of innovation that AWS provides. Read more


Optimizing your environment using AWS Savings Plans

Surprisingly, AWS has very quietly released a major enhancement to the way you purchase compute resources up front. To date, purchasing Reserved Instances (Standard or Convertible) has offered AWS users great savings for their static workloads. This works because static workloads tend to utilize a set number of resources, and RIs are paid for in advance, justifying the financial commitment. That said, how often do business needs remain constant, particularly given the pace of today’s product development? So, until now, you had two choices if you couldn’t use your RIs: take the loss and let the RI term run out, or undertake the hassle of selling them on the marketplace (potentially at a loss). AWS Savings Plans, on the other hand, are a giant leap forward in solving this problem. In fact, you will find that these AWS Savings Plans provide far more flexibility and return on your investment than the standard RI model.

Here is the gist of the AWS Savings Plans program, taken from the AWS site:

AWS Savings Plans is a flexible pricing model that provides savings of up to 72% on your AWS compute usage. This pricing model offers lower prices on Amazon EC2 instances usage, regardless of instance family, size, OS, tenancy or AWS Region, and also applies to AWS Fargate usage.

AWS Savings Plans offer significant savings over On Demand, just like EC2 Reserved Instances, in exchange for a commitment to use a specific amount of compute power (measured in $/hour) for a one- or three-year period. You can sign up for Savings Plans for a 1- or 3-year term and easily manage your plans by taking advantage of recommendations, performance reporting and budget alerts in the AWS Cost Explorer. (Jeff Barr, AWS, 11.06.2019)

This is HUGE for AWS clients, because now, for the first time ever, savings can also be applied to workloads that leverage serverless containers—as well as traditional EC2 instances!

Currently there are two AWS Savings Plans, and here’s how they compare:

EC2 Instance Savings Plan

  • Offers discount levels up to 72% off on-demand rates (same as RIs).
  • Any changes in instances are restricted to the same AWS Region.
  • Restricts EC2 instance types to the same family, but allows changes in instance size and OS (e.g., t3.medium to t3.2xlarge).
  • EC2 instances only. Similar to Convertible RIs, this plan allows you to increase instance size, with a new twist: you can also reduce instance size! Yes, this means you may no longer have to sell your unused RIs on the marketplace!
  • BOTTOM LINE: Slightly less flexible, but you garner a greater discount.

Compute Savings Plan

  • Offers discount levels up to 66% off on-demand rates (the same rate as Convertible RIs).
  • Spans Regions, which could be a huge draw for companies that need regional or national coverage.
  • More flexible. Does not limit EC2 instance families or OS, so you are no longer locked into a specific instance family at the moment of purchase, as you would be with a traditional RI.
  • Allows clients to mix and match AWS products, such as EC2 and Fargate; extremely beneficial for clients who use a range of environments for their workloads.
  • BOTTOM LINE: More flexible, but with less of a discount.
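
To make the “commitment measured in $/hour” model concrete, here is a minimal Python sketch of the arithmetic for a steady, always-on workload. Every rate and the discount percentage below are illustrative placeholders, not AWS pricing; real numbers should come from the Savings Plans recommendations in AWS Cost Explorer.

  # Rough sketch of the Savings Plans commitment model.
  # All rates below are made-up examples, not AWS pricing.
  HOURS_PER_MONTH = 730

  on_demand_rate = 0.20   # $/hour your steady-state compute costs on demand
  sp_discount = 0.30      # assumed Savings Plans discount for this usage profile
  sp_rate = on_demand_rate * (1 - sp_discount)  # usage re-priced at the plan rate

  commitment = 0.12       # $/hour committed for a 1- or 3-year term

  # Each hour you pay the full commitment; usage is drawn against it at the
  # discounted rate, and anything not covered falls back to on-demand pricing.
  uncovered = max(sp_rate - commitment, 0.0) / (1 - sp_discount)
  hourly_cost = commitment + uncovered

  print(f"On demand:    ${on_demand_rate * HOURS_PER_MONTH:7.2f}/month")
  print(f"Savings Plan: ${hourly_cost * HOURS_PER_MONTH:7.2f}/month")

The takeaway is that sizing the commitment close to your steady-state usage is what drives the savings; over-committing simply shifts the waste from idle RIs to unused commitment.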

As with standard RI purchases, understanding your workloads will be key to determining when to use AWS Savings Plans vs. standard RIs (RIs aren’t going anywhere, but we recommend that Savings Plans be used in place of RIs moving forward) vs. On-Demand (including analysis of potential savings from auto-parking, seasonality, elasticity, and so on). Sound a bit overwhelming? Fear not! This is where 2nd Watch’s Cloud Optimization service excels! Enrollment starts with a full analysis of your organization’s usage, AWS environment, and any other requirements/restrictions your organization may have. The final result is a detailed report, expertly determined by our AWS-certified optimization engineers, with our savings findings and recommendations—customized just for you!

AWS Savings Plans will bring the most immediate value to clients who are either new to AWS or don’t currently have RI commitments on their account, because Savings Plans unfortunately cannot replace existing RI purchases. Whatever your goals, our optimization experts are ready to help you plan the most strategically efficient and cost-effective next step of your cloud transformation.

And that’s just the beginning

If you think that AWS Savings Plans may benefit your new or existing AWS deployment, contact us to jumpstart an analysis.

-Jeff Collins, Cloud Optimization Product Management


AWS re:Invent 2019: AWS Product/Service Review, a Networking Perspective

Announcements for days!

AWS re:Invent 2019 has come and gone, and now the collective audience has to sort through the massive list of AWS announcements released at the event.  According to the AWS re:Invent 2019 Recap communication, AWS released 77 products, features and services in just 5 days!  Many of the announcements were in the Machine Learning (ML) space (20 total), closely followed by announcements around Compute (16 total), Analytics (6 total), Networking and Content Delivery (5 total), and AWS Partner Network (5 total), amongst others.   In the area of ML, things like AWS DeepComposer, Amazon SageMaker Studio, and Amazon Fraud Detector topped the list.  While in the Compute, Analytics, and Networking space, Amazon EC2 Inf1 Instances, AWS Local Zones, AWS Outposts, Amazon Redshift Data lake, AWS Transit Gateway Network Manager, and Inter-Region Peering were at the forefront. Here at 2nd Watch we love the cutting-edge ML feature announcements like everyone else, but we always have our eye on those announcements that key-in on what our customers need now – announcements that can have an immediate benefit for our customers in their ongoing cloud journey.

All About the Network

In Matt Lehwess’ presentation, Advanced VPC design and new capabilities for Amazon VPC, he kicked off the discussion with a poignant note: “Networking is the foundation of everything, it’s how you build things on AWS, you start with an Amazon VPC and build up from there. Networking is really what underpins everything we do in AWS. All the services rely on Networking.” This statement strikes a chord here at 2nd Watch, as we have seen that sentiment in action. Over the last couple of years, our customers have been accelerating their use of VPCs, and, as of 2018, Amazon VPC is the number one AWS service used by our customers, with 100% of them using it. We look for that same trend to continue as 2019 comes to an end. It’s not the sexiest part of AWS, but networking provides the foundation that brings all of the other services together. So, focusing on newer and more efficient networking tools and architectures to get services to communicate is always at the top of the list when we look at new announcements. Here are our takes on the key announcements.

AWS Transit Gateway Inter-Region Peering (Multi-Region)

One exciting feature announcement in the networking space is Inter-Region Peering for AWS Transit Gateway. This feature lets you establish peering connections between Transit Gateways in different AWS Regions. Previously, connectivity between two Transit Gateways could only be achieved through a Transit VPC, which carried the overhead of running your own networking devices as part of that VPC. Inter-Region Peering for AWS Transit Gateway enables you to remove the Transit VPC and connect Transit Gateways directly.

The solution uses a new static attachment type called a Transit Gateway Peering Attachment that, once created, requires an acceptance or rejection from the accepter Transit Gateway.  In the future, AWS will likely allow dynamic attachments, so they advise you to create unique ASNs for each Transit Gateway for the easiest transition.  The solution also uses encrypted VPC peering across the AWS backbone.  Currently Transit Gateway inter-region peering support is available for gateways in US East (Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and EU (Frankfurt) AWS Regions with support for other regions coming soon.  You also can’t peer Transit Gateways in the same region.

(Source: Matt Lehwess: Advanced VPC design and new capabilities for Amazon VPC (NET305))

On the surface, the ability to connect two Transit Gateways is just an incremental feature, but when you start to think through the different use cases, as well as the follow-on announcements of multi-region Transit Gateway peering and Accelerated VPN solutions, the architectural options really open up. This effectively enables you to create a private, highly performant global network on top of the AWS backbone. Great stuff!
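
For teams that want to try this out, the create-and-accept flow can be scripted with boto3. This is a minimal sketch under the assumption of two existing Transit Gateways; the gateway IDs, account ID, and regions are placeholders to replace with your own:

  import boto3

  use1 = boto3.client("ec2", region_name="us-east-1")  # requester side
  usw2 = boto3.client("ec2", region_name="us-west-2")  # accepter side

  # Request the peering attachment from the requester gateway's region.
  attachment = use1.create_transit_gateway_peering_attachment(
      TransitGatewayId="tgw-0aaaaaaaaaaaaaaaa",      # TGW in us-east-1
      PeerTransitGatewayId="tgw-0bbbbbbbbbbbbbbbb",  # TGW in us-west-2
      PeerAccountId="111122223333",                  # account that owns the peer TGW
      PeerRegion="us-west-2",
  )["TransitGatewayPeeringAttachment"]

  # The accepter Transit Gateway must explicitly accept (or reject) the attachment.
  usw2.accept_transit_gateway_peering_attachment(
      TransitGatewayAttachmentId=attachment["TransitGatewayAttachmentId"]
  )

  # Routing over the peering is static today, so the last step is adding routes
  # to each Transit Gateway route table that point at the peering attachment.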

AWS Transit Gateway Network Manager

This new feature is used to centrally monitor your global network across AWS and on-premises environments. Transit Gateway Network Manager simplifies the operational complexity of managing networks across regions and remote locations. It is another AWS feature that takes a dashboard approach, providing a simpler overview of resources that may be spread across several regions and accounts. To use it, you create a Global Network within the tool, an object in the AWS Transit Gateway Network Manager service that represents your private global network in AWS. It includes your AWS Transit Gateway hubs, their attachments, and your on-premises devices, sites, and links. Once the Global Network is created, you extend the configuration by adding Transit Gateways, information about your on-premises devices, sites, and links, and the Site-to-Site VPN connections with which they are associated, and then start using it to visualize and monitor your network. It includes a nice geographic world map view to visualize VPNs (whether they are up, down, or impaired) and Transit Gateway peering connections.
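
Getting started is simple enough to script. Below is a minimal boto3 sketch that creates a Global Network and registers an existing Transit Gateway with it; the Transit Gateway ARN is a placeholder, and it assumes the Network Manager APIs are called from their home region:

  import boto3

  # Network Manager is a global service; its APIs are served from us-west-2.
  nm = boto3.client("networkmanager", region_name="us-west-2")

  global_network = nm.create_global_network(
      Description="Global network for Transit Gateway monitoring"
  )["GlobalNetwork"]

  # Register an existing Transit Gateway so its attachments, peerings, and VPNs
  # appear in the Network Manager dashboard and topology view.
  nm.register_transit_gateway(
      GlobalNetworkId=global_network["GlobalNetworkId"],
      TransitGatewayArn="arn:aws:ec2:us-west-2:111122223333:transit-gateway/tgw-0aaaaaaaaaaaaaaaa",
  )

  # On-premises context can then be layered in with create_site, create_device,
  # and create_link to complete the global picture.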


There’s also a nice Topology feature that shows VPCs, VPNs, Direct Connect gateways, and AWS Transit Gateway-AWS Transit Gateway peering for all registered Transit gateways.  It provides an easier way to understand your entire global infrastructure from a single view.

Another key feature is the integration with SD-WAN providers like Cisco, Aviatrix, and others. Many of these solutions will integrate with AWS Transit Gateway Network Manager and automate the branch-cloud connectivity and provide end-to-end monitoring of the global network from a single dashboard. It’s something we look forward to exploring with these SD-WAN providers in the future.

AWS Local Zones

AWS Local Zones is an interesting new service that addresses challenges we’ve encountered with customers. Although listed under Compute rather than Networking and Content Delivery on the re:Invent 2019 announcement list, Local Zones is a powerful new feature with networking at its core.

Latency tolerance for application stacks running in a hybrid scenario (i.e., app servers in AWS, database on-premises) is a standard conversation when planning a migration. Historically, those conversations were predicated on the customer’s proximity to an AWS region. Depending on requirements, customers in Portland, Oregon may have had the option to run a hybrid application stack, while those in Southern California may have been excluded. The announcement of Local Zones (initially just in Los Angeles) opens up those options to markets that were not previously served. I hope this is the first of many localized resource deployments.

That’s no Region…that’s a Local Zone

Local Zones are interesting in that they only have a subset of the services available in a standard region.  Local Zones are organized as a child of a parent region, notably the Los Angeles Local Zone is a child of the Oregon Region.  API communication is done through Oregon, and even the name of the LA Local Zone AZ maps to Oregon (Oregon AZ1= us-west-2a, Los Angeles AZ1 = us-west-2-lax-1a).  Organizationally, it’s easiest to think of them as remote Availability Zones of existing regions.

As of December 2019, only a limited number of services are available, including EC2, EBS, FSx, ALB, VPC, and single-AZ RDS.  Pricing appears to be roughly 20% higher than in the parent region.  Given that this is the first Local Zone, we don’t know whether that premium will always apply or whether it depends on location.  One would assume that Los Angeles would be a higher-cost location whether it was a Local Zone or a full region.
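
Because a Local Zone behaves like an additional Availability Zone of its parent region, using it is largely a matter of opting in to the zone group and extending an existing VPC into it. Here is a minimal boto3 sketch with a placeholder VPC ID and CIDR, based on the zone names described above:

  import boto3

  # The LA Local Zone is a child of the Oregon region, so use the parent region's API.
  ec2 = boto3.client("ec2", region_name="us-west-2")

  # Local Zones are opt-in, per zone group.
  ec2.modify_availability_zone_group(
      GroupName="us-west-2-lax-1", OptInStatus="opted-in"
  )

  # Extend an existing VPC (placeholder ID) into the Local Zone with a new subnet.
  subnet = ec2.create_subnet(
      VpcId="vpc-0example1234567890",
      CidrBlock="10.0.64.0/24",
      AvailabilityZone="us-west-2-lax-1a",  # Los Angeles Local Zone AZ
  )["Subnet"]

  # Instances, EBS volumes, and load balancers launched into this subnet run in
  # Los Angeles while being managed through the Oregon region's APIs and console.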

All the Things

To see all of the things that were launched at re:Invent 2019, check out the re:Invent 2019 Announcement Page. For all AWS announcements, not just re:Invent 2019 launches (e.g., things that launched just prior to re:Invent), check out the What’s New with AWS webpage. If you missed the show completely or just want to re-watch your favorite AWS presenters, you can see many of the re:Invent presentations on the AWS Events YouTube channel. After you’ve done all that research and watched all those videos and are ready to get started, you can always reach out to us at 2nd Watch. We’d love to help!

-Derek Baltazar, Managing Consultant

-Travis Greenstreet, Principal Architect


Top 5 takeaways from AWS re:Invent 2019

AWS re:Invent always presents us with a cornucopia of new cloud capabilities to build with and be inspired by, so listing just a few of the top takeaways can be a real challenge.

There are the announcements that I would classify as “this is cool, I can’t wait to hack on this,” which for me, a MIDI aficionado and ML wannabe, would include DeepComposer. Then there are other announcements that fall into the “good to know in case I ever need it” bucket, such as AWS Local Zones. And finally, there are those that jump out at us because “our clients have been asking for this, hallelujah!”

I’m going to prioritize this list based on the latter group to start, but check back in a few months because, if my DeepComposer synthpop track drops on SoundCloud, I might want to revisit these rankings.

#5 AWS Compute Optimizer

“AWS Compute Optimizer uses machine learning techniques to analyze the history of resource consumption on your account and make well-articulated and actionable recommendations tailored to your resource usage.”

Our options for EC2 instance types continue to evolve and grow over time. These evolutions address optimizations for specialized workloads (e.g., the new Inf1 instances), which means a better performance-to-cost ratio for those types of workloads.

The challenge for 2nd Watch clients (and everyone else in the Cloud) is maintaining an up-to-date knowledge of the options available and continually applying the best instance types to the needs of their workloads on an ongoing basis. That is a lot of information to keep up on, understand, and manage, and you’re probably wondering, “how do other companies deal with this?”

The ones managing it best have tools (such as CloudHealth) to help, but cost optimization is an area that requires continual attention and experience to yield the best results. Where AWS Compute Optimizer immediately adds value is in surfacing inefficiencies without the cost of 3rd-party tools to get started. You will need the CloudWatch agent installed to gather OS-level metrics for the best results, but this is a standard requirement for these types of tools. What remains to be seen in the coming months is how Compute Optimizer compares to the commercial 3rd-party tools on the market in terms of uncovering overall savings opportunities. One clear advantage 3rd-party tools retain, however, is the ability to optimize across multiple cloud service providers.
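
Once the account is opted in, pulling the findings is only a couple of API calls. Here is a minimal boto3 sketch; the response field names reflect our reading of the service documentation, so treat the exact shapes as assumptions to verify:

  import boto3

  co = boto3.client("compute-optimizer", region_name="us-west-2")

  # One-time opt-in so the service starts analyzing your CloudWatch metrics.
  co.update_enrollment_status(status="Active")

  # Later, pull rightsizing findings for EC2 instances in this region.
  resp = co.get_ec2_instance_recommendations()
  for rec in resp.get("instanceRecommendations", []):
      options = rec.get("recommendationOptions", [])
      suggested = options[0]["instanceType"] if options else "n/a"
      print(rec["instanceArn"], rec["finding"], "->", suggested)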

#4 Amazon ECS now supports Active Directory Authentication using Windows Accounts gMSA

“Amazon Elastic Container Service (ECS) now supports Windows group Managed Service Account (gMSA), a new capability that allows ECS customers to authenticate and authorize their Windows containers with network resources using an Active Directory (AD). Customers can now easily use Integrated Windows Authentication with their Windows containers on ECS to secure services.”

This announcement was not part of any keynote, but thanks to fellow 2nd Watcher and Principal Cloud Consultant Joey Yore bringing it to my attention, it is definitely making my list. Over the course of the past year, several of our clients on a container adoption path for their .NET workloads were stymied by this very lack of Windows gMSA support.

Drivers for migrating these .NET apps from EC2 to containers include easier blue/green deployments for faster time-to-market, simplified operations from minimizing the overall Windows footprint to monitor and manage, and cost savings associated with the consolidated Windows estate. The challenge encountered was authentication for these Windows apps: without the gMSA feature, the applications would require a time-intensive refactor or an EC2-based workaround with its own management overhead. That raised questions about AWS’ long-term commitment to Windows containers, and thankfully, this release signals that Windows is not being sidelined.
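
To give a sense of what adopting the feature looks like, here is a heavily simplified boto3 sketch of a Windows task definition that references a gMSA credential spec. The use of dockerSecurityOptions with a credentialspec reference, and the file location, reflect our understanding of the feature at launch; validate the details against the current ECS documentation before relying on them:

  import boto3

  ecs = boto3.client("ecs", region_name="us-east-1")

  # At launch, Windows containers with gMSA ran on the EC2 launch type, with the
  # credential spec JSON distributed to the Windows container instances.
  ecs.register_task_definition(
      family="dotnet-app-gmsa",
      requiresCompatibilities=["EC2"],
      containerDefinitions=[
          {
              "name": "dotnet-app",
              "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/dotnet-app:latest",
              "cpu": 1024,
              "memory": 2048,
              # Reference the gMSA credential spec so the container can use
              # Integrated Windows Authentication against Active Directory.
              "dockerSecurityOptions": ["credentialspec:file://gmsa-cred-spec.json"],
          }
      ],
  )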

#3 AWS Security Hub Gets Smarter

Third on the list is a 2-for-1 special, because security and compliance is one of the most common areas in which our clients come to us for help. Cloud gives builders all of the tools they need to build and run secure applications, but defining controls and ensuring their continual enforcement requires consistent and deliberate work. In response to this need, we’ve seen AWS release more services that streamline activities for security operations teams. That list of tools includes Amazon GuardDuty, Amazon Macie, and, more recently, AWS Security Hub, which these two selections integrate with:

3a) AWS Identity and Access Management (IAM) Access Analyzer

“AWS IAM Access Analyzer generates comprehensive findings that identify resources that can be accessed from outside an AWS account. AWS IAM Access Analyzer does this by evaluating resource policies using mathematical logic and inference to determine the possible access paths allowed by the policies. AWS IAM Access Analyzer continuously monitors for new or updated policies, and it analyzes permissions granted using policies for their Amazon S3 buckets, AWS KMS keys, Amazon SQS queues, AWS IAM roles, and AWS Lambda functions.”

If you’ve worked with IAM, you know that without deliberate design and planning, it can become an unwieldy mess quickly. Disorganization with your IAM policies means you run the risk of creating inadvertent security holes in your infrastructure, which might not be immediately apparent. This new feature to AWS Security Hub streamlines the process for surfacing those latent IAM issues that may have otherwise gone unnoticed.
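
Turning it on is a single API call per account (an organization-wide analyzer is also available), and findings can then be pulled programmatically or reviewed in Security Hub. A minimal boto3 sketch with placeholder names:

  import boto3

  aa = boto3.client("accessanalyzer", region_name="us-east-1")

  # Create an account-level analyzer (type="ORGANIZATION" covers a whole AWS Org).
  analyzer = aa.create_analyzer(
      analyzerName="account-access-analyzer",
      type="ACCOUNT",
  )

  # Each finding names a resource (S3 bucket, KMS key, IAM role, SQS queue, or
  # Lambda function) that a policy makes reachable from outside the account.
  findings = aa.list_findings(analyzerArn=analyzer["arn"])
  for finding in findings.get("findings", []):
      print(finding["resource"], finding["status"])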

3b) Amazon Detective

“Amazon Detective is a new service in Preview that makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. Amazon Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that enables you to easily conduct faster and more efficient security investigations.”

The result of Amazon’s acquisition of Sqrrl in 2018, Amazon Detective is another handy tool that helps separate the signal from the noise in the cacophony of cloud event data generated across accounts. What’s different about this service as compared to others like GuardDuty is that it builds relationship graphs which can be used to rapidly identify links (edges) between events (nodes). This is a powerful capability to have when investigating security events and the possible impact across your Cloud portfolio.

#2 EC2 Image Builder

“EC2 Image Builder is a service that makes it easier and faster to build and maintain secure images. Image Builder simplifies the creation, patching, testing, distribution, and sharing of Linux or Windows Server images.”

2nd Watch clients have needed an automated solution to “bake” consistent machine images for years, and our “Machine Image Factory” solution accelerator was developed to efficiently address the need using tools such as Hashicorp Packer, AWS CodeBuild, and AWS CodePipeline.

The reason this solution has been so popular is that by having your own library of images customized to your organization’s requirements (e.g., security configurations, operations tooling, patching), you can release applications faster and with greater consistency, without burdening your teams’ time or focus watching installation progress bars when they could be working on higher-value business activities.

What’s great about AWS releasing this capability as a native service offering is that it is making a best-practice pattern even more accessible to organizations without confusing the business outcome with an array of underlying tools being brought together to make it happen. If your team wants to get started with EC2 Image Builder but you need help with understanding how to get from your current “hand crafted” images to Image Builder’s recipes and tests, we can help!

#1 Outposts

“AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience. AWS Outposts is ideal for workloads that need low latency access to on-premises applications or systems, local data processing, and to securely store sensitive customer data that needs to remain anywhere there is no AWS region, including inside company-controlled environments or countries.”

It’s 2019, and plants are now meat and AWS is hardware you can install in your datacenter. I will leave it to you to guess which topic has been more hotly debated on the 2nd Watch Slack, but amongst our clients, Outposts has made its way into many conversations since its announcement at re:Invent 2018. Coming out of last week’s announcement of Outposts GA, I think we will be seeing a lot more of this service in 2020.

One of the reasons I hear clients inquiring about Outposts is that it fills a gap for workloads with proximity or latency requirements to manufacturing plants or another type of strategic regional facility. This “hyper-local” need echoes the announcement for AWS Local Zones, which presents a footprint for AWS cloud resources targeting a specific geography (Los Angeles, CA initially).

Of course, regional datacenters and other hyperconverged platforms already exist to run these types of workloads, but what is so powerful about Outposts is that it brings the cloud operations model back to the datacenter. The cloud skills your teams have developed and hired for don’t need to be shelved in favor of a disparate skill set on a niche hardware vendor platform that could be irrelevant three years from now.

I’m excited to see how these picks and all of the new services announced play out over the next year. There is a lot here for businesses to implement in their environments to drive down costs, improve visibility and security, and dial in performance for their differentiating workloads.

Head over to our Twitter account, @2ndWatch, if you think there should be others included in our top 5 list. We’d love to get your take!

-Joe Conlin, Solutions Architect


Demystifying DevOps

My second week at 2nd Watch, it happened.  I was minding my own business, working to build a framework for our products and how they’d be released when suddenly, an email dinged into my inbox. I opened it and gasped. It read in sum:

  • We want to standardize on our DevOps tooling
  • We have this right now https://landscape.cncf.io/images/landscape.png
  • We need to pick one from each bucket
  • We want to do this next week with everyone in the room, so we don’t create silos

This person sent me the CNCF Landscape and asked us to choose which tool to use from each area to enable “DevOps” in their org. Full stop.

I spent the better part of the day attempting to respond to the message without condescension or simply saying ‘no.’ I mean, the potential client had a point. He said they had a “planet of tools” and wanted to narrow it down to one for each area. A noble cause. They wanted a single resource from 2nd Watch who had used all the tools to help them make a decision. Their intentions were honorable, but the approach was misguided from the start.

The thing is, this situation hasn’t come up just once in my career in the DevOps space. It reflects a rampant misunderstanding of the part that tooling, automation, and standardization play in a DevOps transformation. Normalizing the tech stack across your enterprise wastes time and creates enemies. This is not DevOps.

In this blog, I’ll attempt to share my views on how to approach your transformation. Hint: We’re not spending time in the CNCF Landscape.

Let’s start with a definition.

DevOps is a set of cultural values and organizational practices that improve business outcomes by increasing collaboration and feedback between Business Stakeholders, Development, QA, IT Operations, and Security.

A true DevOps transformation includes an evolution of your company culture, automation and tooling, processes, collaboration, measurement systems, and organizational structure.

DevOps does not have an ‘end state’ or level of maturity. The goal is to create a culture of continuous improvement.

DevOps methods were initially formed to bridge the gap between Development and Operations so that teams could increase speed to delivery as well as quality of product at the same time. So, it does seem fair that many feel a tool can bridge this gap. However, without increasing communication and shaping company culture to enable reflection and improvement, tooling ends up being a one-time fix for a consistently changing and evolving challenge. Imagine trying to patch a hole in a boat with the best patch available while the rest of the crew is poking holes in the hull. That’s a good way to imagine how implementing tooling to fix cultural problems works.

Simply stated – No, you cannot implement DevOps by standardizing on tooling.

Why is DevOps so difficult?

Anytime you have competing priorities between departments you end up with conflict and difficulty. Development is focused on speed and new features while Operations is required to keep the systems up and running. The two teams have different goals, some of which undo each other.

For example: As the Dev team is trying to get more features to market faster, the Ops team is often causing a roadblock so that they can keep to their Service Level Agreement (SLA). An exciting new feature delivered by Dev may be unstable, causing downtime in Ops. When there isn’t a shared goal of uptime and availability, both teams suffer.

CHART: Dev vs. Ops

Breaking Down Walls

Many liken the conflict between Dev and Ops to a wall. It’s a barrier between the two teams, and oftentimes Dev, in their effort to increase velocity, will throw a new feature or application over the proverbial wall to Ops and expect them to deal with running the app at scale.

Companies attempt to fix this problem by increasing communications between the two teams, though, without shared responsibility and goals, the “fast to market” Dev team and the “keep the system up and running” Ops team are still at odds.

CI/CD ≠ DevOps

I’ll admit, technology can enable your DevOps transformation, though don’t be fooled by the marketing that suggests simply implementing continuous integration and continuous delivery (CI/CD) tooling will kick-start your DevOps journey. Experience shows us that implementing automation without changing culture just results in speeding up our time to failure.

CI/CD can enable better communication, automate the toil and rework that exists, and make your developers and ops team happier. But it takes time and effort to train them on the tooling, evaluate processes, match or change your company’s policies, and integrate security. One big area that is often overlooked is executive leadership’s acceptance and support of these changes.

The Role of Leadership in DevOps

The changes implemented during a DevOps transformation all have a quantifiable impact on a company’s profit, productivity, and market share. It is no wonder that organizational leaders and executives have an impact on these changes. The challenge is that their impact is often overlooked up front, causing the transformation to stall considerably and produce mixed results.

According to the book Accelerate: Building and Scaling High Performing Technology Organizations by Nicole Forsgren, PhD, Jez Humble, and Gene Kim, the “characteristics of transformational leadership are highly correlated with software delivery performance.” Leaders need to up their game by enabling cross-functional collaboration, building a climate of learning, and supporting effective use and choice of tools (not choosing for the team like we saw at the beginning of this blog).

Leaders should be asking themselves upfront “how can I create a culture of learning and where do we start?” The answer to this question is different for every organization, though some similarities exist. One of the best ways to figure out where to start is to figure out where you are right now.

2nd Watch offers a 2-week DevOps Assessment and Strategy engagement that helps you identify and assess a single value stream to pinpoint the areas for growth and transformation. This is a good start to identify key areas for improvement. This assessment follows the CALMS framework originally developed by Damon Edwards and John Willis at DevOpsDays 2010 and then enhanced by Jez Humble. It’s an acronym representing the fundamental elements of DevOps:  Culture, Automation, Lean, Metrics, and Sharing.

  • Culture – a culture of shared responsibility, owned by everyone
  • Automation – eliminating toil, make automation continuous
  • Lean – visualize work in progress, limit batch sizes and manage queue lengths
  • Metrics – collect data on everything and provide visibility into systems and data
  • Sharing – encourage ongoing communication, build channels to enable this

While looking at each of these pillars we overlay the people, process and technology aspects to ensure full coverage of the methodology.

  • People: We assess your current team organizational structure, culture, and the role security plays. We also assess what training may be required to get your team off on the right foot.
  • Process: We review your collaboration methods, processes, and best practices.
  • Technology: We identify any gaps in your current tool pipeline, including image creation, CI/CD, monitoring, alerting, log aggregation, security and compliance, and configuration management.

What’s Next?

DevOps is over 10 years old, and it’s moving from concept and ‘nice to have’ into a necessity for companies that want to keep up in the digital economy. New technologies like serverless, containers, and machine learning, along with an increased focus on security, are shifting the landscape, making it even more essential that companies adopt the growth mindset expected in a DevOps culture.

Don’t be left behind. DevOps is no longer a fad diet, it’s a lifestyle change. And if you can’t get enough of Demystifying Devops, check out our webinar on ‘Demystifying Devops: Identifying Your Path to Faster Releases and Higher Quality’ on-demand.

-Stefana Muller, Sr Product Manager
