Top 5 takeaways from AWS re:Invent 2019

AWS re:Invent always presents us with a cornucopia of new cloud capabilities to build with and be inspired by, so listing just a few of the top takeaways can be a real challenge.

There are the announcements that I would classify as “this is cool, I can’t wait to hack on this,” which for me, a MIDI aficionado and ML wannabe, would include DeepComposer. Then there are other announcements that fall in the “good to know in case I ever need it” bucket, such as AWS Local Zones. And finally, there are those that jump out at us because “our clients have been asking for this, hallelujah!”

I’m going to prioritize this list based on the latter group to start, but check back in a few months because, if my DeepComposer synthpop track drops on SoundCloud, I might want to revisit these rankings.

#5 AWS Compute Optimizer

“AWS Compute Optimizer uses machine learning techniques to analyze the history of resource consumption on your account and make well-articulated and actionable recommendations tailored to your resource usage.”

Our options for EC2 instance types continue to evolve and grow over time. These evolutions address optimizations for specialized workloads (e.g., the new Inf1 instances), which means better performance-to-cost for those types of workloads.

The challenge for 2nd Watch clients (and everyone else in the Cloud) is maintaining up-to-date knowledge of the options available and continually applying the best instance types to the needs of their workloads. That is a lot of information to keep up on, understand, and manage, and you’re probably wondering, “How do other companies deal with this?”

The ones managing it best have tools (such as CloudHealth) to help, but cost optimization is an area that requires continual attention and experience to yield the best results. Where AWS Compute Optimizer will immediately add value is in surfacing inefficiencies without the cost of a 3rd party tool to get started. You will need the CloudWatch agent installed to gather OS-level metrics for the best results, but this is a standard requirement for these types of tools. What remains to be seen in the coming months is how Compute Optimizer compares to the commercial 3rd party tools on the market in terms of uncovering overall savings opportunities. The obvious advantage 3rd party tools will retain, however, is their ability to optimize across multiple cloud service providers.
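To make the idea concrete, here is a minimal sketch of triaging the kind of output Compute Optimizer returns. The field names mirror the GetEC2InstanceRecommendations API response, but the data (ARNs, instance types, findings) is invented for illustration:

```python
# Invented sample shaped like a Compute Optimizer recommendations response.
sample_recommendations = [
    {"instanceArn": "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc",
     "currentInstanceType": "m5.2xlarge",
     "finding": "Overprovisioned",
     "recommendationOptions": [
         {"instanceType": "m5.xlarge", "performanceRisk": 1.0}]},
    {"instanceArn": "arn:aws:ec2:us-east-1:123456789012:instance/i-0def",
     "currentInstanceType": "c5.large",
     "finding": "Optimized",
     "recommendationOptions": []},
]

def rightsizing_candidates(recommendations):
    """Return (instance ARN, current type, top suggested type) for any
    instance Compute Optimizer does not consider optimized."""
    candidates = []
    for rec in recommendations:
        if rec["finding"] == "Optimized":
            continue
        options = rec["recommendationOptions"]
        suggested = options[0]["instanceType"] if options else None
        candidates.append(
            (rec["instanceArn"], rec["currentInstanceType"], suggested))
    return candidates

for arn, current, suggested in rightsizing_candidates(sample_recommendations):
    print(f"{arn}: {current} -> {suggested}")
```

In practice you would pull the recommendations via the AWS CLI or an SDK rather than hard-coding them, then feed candidates like these into your change-management process.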

#4 Amazon ECS now supports Active Directory Authentication using Windows Accounts gMSA

“Amazon Elastic Container Service (ECS) now supports Windows group Managed Service Account (gMSA), a new capability that allows ECS customers to authenticate and authorize their Windows containers with network resources using an Active Directory (AD). Customers can now easily use Integrated Windows Authentication with their Windows containers on ECS to secure services.”

This announcement was not part of any keynote, but thanks to fellow 2nd Watcher and Principal Cloud Consultant Joey Yore bringing it to my attention, it is definitely making my list. Over the course of the past year, several of our clients on a container adoption path for their .NET workloads were stymied by this very lack of Windows gMSA support.

Drivers for migrating these .NET apps from EC2 to containers include easier blue/green deployments for faster time-to-market, simplified operations by minimizing the overall Windows footprint to monitor and manage, and the cost savings associated with a consolidated Windows estate. The challenge encountered was with authentication for these Windows apps: without the gMSA feature, the applications would either require a time-intensive refactor or have to lean on an EC2-based solution with added management overhead. This raised questions about AWS’s long-term commitment to Windows containers, and thankfully, this release signals that Windows is not being sidelined.
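As a sketch of how this lands in practice, a Windows container in an ECS task definition points at a credential spec file via its `dockerSecurityOptions`. The family, image, and file names below are hypothetical:

```json
{
  "family": "windows-gmsa-app",
  "containerDefinitions": [
    {
      "name": "dotnet-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/dotnet-app:latest",
      "cpu": 1024,
      "memory": 2048,
      "essential": true,
      "dockerSecurityOptions": ["credentialspec:file://webapp01_credspec.json"]
    }
  ]
}
```

The credential spec ties the container to the gMSA, so the app can use Integrated Windows Authentication without the container itself being domain-joined.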

#3 AWS Security Hub Gets Smarter

Third on the list is a 2-for-1 special, because security and compliance is one of the most common areas in which our clients come to us for help. Cloud gives builders all of the tools they need to build and run secure applications, but defining controls and ensuring their continual enforcement requires consistent and deliberate work. In response to this need, we’ve seen AWS release more services that streamline activities for security operations teams. That list of tools includes Amazon GuardDuty, Amazon Macie, and, more recently, AWS Security Hub, which these two selections integrate with:

3a) AWS Identity and Access Management (IAM) Access Analyzer

“AWS IAM Access Analyzer generates comprehensive findings that identify resources that can be accessed from outside an AWS account. AWS IAM Access Analyzer does this by evaluating resource policies using mathematical logic and inference to determine the possible access paths allowed by the policies. AWS IAM Access Analyzer continuously monitors for new or updated policies, and it analyzes permissions granted using policies for their Amazon S3 buckets, AWS KMS keys, Amazon SQS queues, AWS IAM roles, and AWS Lambda functions.”

If you’ve worked with IAM, you know that without deliberate design and planning, it can become an unwieldy mess quickly. Disorganization with your IAM policies means you run the risk of creating inadvertent security holes in your infrastructure, which might not be immediately apparent. This new feature to AWS Security Hub streamlines the process for surfacing those latent IAM issues that may have otherwise gone unnoticed.
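Access Analyzer itself uses automated reasoning over the full policy language, but a toy check conveys the flavor of the findings it surfaces. The bucket policy below is invented; the check flags only the most obvious case of an external access path, an Allow statement open to any principal:

```python
import json

# Invented S3 bucket policy for illustration.
bucket_policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "PublicRead",
     "Effect": "Allow",
     "Principal": "*",
     "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::example-bucket/*"}
  ]
}
""")

def statements_open_to_any_principal(policy):
    """Return the Sids of Allow statements granted to a bare wildcard
    principal -- the simplest kind of unintended external access."""
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if stmt.get("Effect") == "Allow" and principal in ("*", {"AWS": "*"}):
            flagged.append(stmt.get("Sid"))
    return flagged

print(statements_open_to_any_principal(bucket_policy))
```

Real policies also involve conditions, cross-account principals, and service principals, which is exactly why having the analysis done continuously and mathematically, rather than by eyeball, is valuable.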

3b) Amazon Detective

“Amazon Detective is a new service in Preview that makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. Amazon Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that enables you to easily conduct faster and more efficient security investigations.”

The result of Amazon’s acquisition of Sqrrl in 2018, Amazon Detective is another handy tool that helps separate the signal from the noise in the cacophony of cloud event data generated across accounts. What’s different about this service as compared to others like GuardDuty is that it builds relationship graphs which can be used to rapidly identify links (edges) between events (nodes). This is a powerful capability to have when investigating security events and the possible impact across your Cloud portfolio.

#2 EC2 Image Builder

“EC2 Image Builder is a service that makes it easier and faster to build and maintain secure images. Image Builder simplifies the creation, patching, testing, distribution, and sharing of Linux or Windows Server images.”

2nd Watch clients have needed an automated solution to “bake” consistent machine images for years, and our “Machine Image Factory” solution accelerator was developed to efficiently address the need using tools such as HashiCorp Packer, AWS CodeBuild, and AWS CodePipeline.

The reason this solution has been so popular is that by having your own library of images customized to your organization’s requirements (e.g., security configurations, operations tooling, patching), you can release applications faster and with greater consistency, without burdening your teams with watching installation progress bars when they could be working on higher business value activities.

What’s great about AWS releasing this capability as a native service offering is that it makes a best-practice pattern even more accessible to organizations, without obscuring the business outcome behind the array of underlying tools brought together to make it happen. If your team wants to get started with EC2 Image Builder but you need help understanding how to get from your current “hand crafted” images to Image Builder’s recipes and tests, we can help!
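To give a feel for those recipes, an Image Builder recipe composes component documents like the sketch below. The component name, description, and bash commands are hypothetical; `UpdateOS` and `ExecuteBash` are standard action modules, and the `build` and `test` phases mirror the bake-then-verify flow described above:

```yaml
name: BaselineHardening
description: Hypothetical baseline component for illustration
schemaVersion: 1.0

phases:
  - name: build
    steps:
      - name: InstallUpdates
        action: UpdateOS
  - name: test
    steps:
      - name: CheckKernelVersion
        action: ExecuteBash
        inputs:
          commands:
            - uname -r
```

Each component is versioned, so updating your baseline becomes a controlled change to a recipe rather than a manual rebuild of a golden image.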

#1 Outposts

“AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience. AWS Outposts is ideal for workloads that need low latency access to on-premises applications or systems, local data processing, and to securely store sensitive customer data that needs to remain anywhere there is no AWS region, including inside company-controlled environments or countries.”

It’s 2019, and plants are now meat and AWS is hardware you can install in your datacenter. I will leave it to you to guess which topic has been more hotly debated on the 2nd Watch Slack, but amongst our clients, Outposts has made its way into many conversations since its announcement at re:Invent 2018. Coming out of last week’s announcement of Outposts GA, I think we will be seeing a lot more of this service in 2020.

One of the reasons I hear clients inquiring about Outposts is that it fills a gap for workloads with proximity or latency requirements to manufacturing plants or another type of strategic regional facility. This “hyper-local” need echoes the announcement for AWS Local Zones, which presents a footprint for AWS cloud resources targeting a specific geography (Los Angeles, CA initially).

Of course, regional datacenters and other hyperconverged platforms already exist to run these types of workloads, but what is so powerful about Outposts is that it brings the Cloud operations model back to the datacenter. The cloud skills your teams have developed and hired for don’t need to be set aside to learn a disparate set of skills on a niche hardware vendor’s platform that could be irrelevant 3 years from now.

I’m excited to see how these picks and all of the new services announced play out over the next year. There is a lot here for businesses to implement in their environments to drive down costs, improve visibility and security, and dial in performance for their differentiating workloads.

Head over to our Twitter account, @2ndWatch, if you think there should be others included in our top 5 list. We’d love to get your take!

-Joe Conlin, Solutions Architect


EC2 Container Service (ECS) – A Docker Container Service for the Cloud

At AWS re:Invent, Amazon introduced its new EC2 Container Service (ECS). Although not available yet, it promises to be a vital part of the future of the AWS ecosystem. ECS is touted to be a high performance, highly scalable service that allows you to run distributed applications (in the form of Docker containers) on a fully managed cluster of EC2 instances. The main benefits of ECS as described by Amazon are: Easy Cluster Management, High Scale Performance, Flexible Scheduling, Extensible & Portable, Resource Efficiency, AWS Integration, and Security. All of these benefits help you easily build, run, and scale Docker containers in the cloud.

Is the concept of containers new to you? Let’s take a step back and talk about virtualization and the benefits of containers in terms of running web applications.

In its simplest, most well-known form, classic computer virtualization is the process of separating the software layer (guest OS) from the hardware layer (physical server). The separation is facilitated by other layers of software (host OS and hypervisor) that act as the go-between. This gives you the ability to run multiple virtual machines on a single piece of physical hardware. This simple explanation is the basis for virtualization technologies, including Amazon’s EC2 service.

Now let’s say you want to use that virtual infrastructure to run a web application. In a classic VM, you are in charge of installing the OS. EC2 goes one step further than a classic VM, as it provides you the virtual infrastructure with a vanilla OS already in place. With EC2, when you fire up an instance you are given the choice of which operating system to run – Amazon Linux, Red Hat, Windows, etc. From there, the common steps needed to run a web application would be to build the application, install the needed binaries and libraries, and start the appropriate services. With a few changes to firewall rules or Security Groups, your application would be online. Congratulations, you now have your application running!

So how does containerization help? I like to think of containerization as taking virtualization one step further. Having the ability to run applications on individual virtual machines or instances is great but can become bulky and difficult to manage. An application that may be only 10-50 MBs still requires all of the binaries, libraries, and the entire guest operating system to function. This can easily require an additional 10-15 GBs, yes gigabytes, not megabytes, for the application to run on its own VM. If you want to run several applications, VM resources and administration overhead multiply quickly. Containerization technologies like Docker have gained industry popularity for the ability to build, transport, and run distributable applications in these smaller self-contained packages. A container includes just the application and needed dependencies. It runs as a separate isolated process on the host operating system and shares the kernel with other containers. This allows it to be highly portable and much more efficient by allowing multiple containerized applications to run on the same system. The beauty of it is that a Docker container is completely portable, so you can run it anywhere – like on a desktop computer, a physical server, VM, or EC2 instance – effectively facilitating faster deployments for development, QA, and production environments.
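The “just the application and its dependencies” idea is exactly what a Dockerfile captures. As a hypothetical sketch (the app, its files, and the port are invented), a small web app packages like this:

```dockerfile
# Hypothetical image for a small Python web app. Only the app and its
# dependencies are layered on a slim base; the host kernel is shared,
# not duplicated, which is why the image stays in the MB range.
FROM python:3-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

The same image built from this file runs unchanged on a laptop, a physical server, a VM, or an EC2 instance, which is the portability described above.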

With ECS, Amazon aims to simplify managing containers even more by allowing you to run distributed applications on a managed cluster of EC2 instances. By having a managed cluster, you can concentrate on your containerized applications and not on the cluster software or configuration management systems needed to manage the infrastructure. This would be similar to how RDS is a fully managed database service that allows you to concentrate on your data and not the management and administration of the infrastructure that runs it. The lightweight footprint of a container allows the environment to scale up and down quickly with demand, making it a perfect match for the elasticity of EC2. Additionally, AWS provides a set of simple APIs, so you have complete control of the cluster running your containers and the ability to extend and integrate with your current environment.

The initial announcement is definitely intriguing and something to watch closely. The service is currently in preview, but you can sign up for the waitlist here.

-Derek Baltazar – 2nd Watch Senior Cloud Engineer
