Top 5 takeaways from AWS re:Invent 2019

AWS re:Invent always presents us with a cornucopia of new cloud capabilities to build with and be inspired by, so listing just a few of the top takeaways can be a real challenge.

There are the announcements that I would classify as “this is cool, I can’t wait to hack on this,” which for me, a MIDI aficionado and ML wannabe, would include DeepComposer. Then there are other announcements that fall in the “good to know in case I ever need it” bucket, such as AWS Local Zones. And finally, there are those that jump out at us because “our clients have been asking for this, hallelujah!”

I’m going to prioritize this list based on the latter group to start, but check back in a few months because, if my DeepComposer synthpop track drops on SoundCloud, I might want to revisit these rankings.

#5 AWS Compute Optimizer

“AWS Compute Optimizer uses machine learning techniques to analyze the history of resource consumption on your account and make well-articulated and actionable recommendations tailored to your resource usage.”

Our options for EC2 instance types continue to evolve and grow over time. These evolutions address optimizations for specialized workloads (e.g., the new Inf1 instances), which means better performance-to-cost for those workloads.

The challenge for 2nd Watch clients (and everyone else in the Cloud) is maintaining up-to-date knowledge of the options available and continually applying the best instance types to the needs of their workloads. That is a lot of information to keep up with, understand, and manage, and you’re probably wondering, “how do other companies deal with this?”

The ones managing it best have tools (such as CloudHealth) to help, but cost optimization is an area that requires continual attention and experience to yield the best results. Where AWS Compute Optimizer will immediately add value is in surfacing inefficiencies without the cost of 3rd party tools to get started. You will need the CloudWatch agent installed to gather OS-level metrics for the best results, but this is a standard requirement for these types of tools. What remains to be seen in the coming months is how Compute Optimizer compares to the commercial 3rd party tools on the market in terms of uncovering overall savings opportunities. However, one obvious advantage the 3rd party tools retain is their ability to optimize across multiple cloud service providers.

#4 Amazon ECS now supports Active Directory Authentication using Windows Accounts (gMSA)

“Amazon Elastic Container Service (ECS) now supports Windows group Managed Service Account (gMSA), a new capability that allows ECS customers to authenticate and authorize their Windows containers with network resources using an Active Directory (AD). Customers can now easily use Integrated Windows Authentication with their Windows containers on ECS to secure services.”

This announcement was not part of any keynote, but thanks to fellow 2nd Watcher and Principal Cloud Consultant Joey Yore bringing it to my attention, it is definitely making my list. Over the course of the past year, several of our clients on a container adoption path for their .NET workloads were stymied by this very lack of Windows gMSA support.

Drivers for migrating these .NET apps from EC2 to containers include easier blue/green deployments for faster time-to-market, simplified operations from a smaller Windows footprint to monitor and manage, and cost savings from the consolidated Windows estate. The challenge encountered was with authentication for these Windows apps: without the gMSA feature, the applications required either a time-intensive refactor or an EC2-based workaround with its own management overhead. This raised questions about the long-term commitment of AWS to Windows containers, and thankfully, this release signals that Windows is not being sidelined.

#3 AWS Security Hub Gets Smarter

Third on the list is a 2-for-1 special, because security and compliance are among the most common areas our clients come to us for help with. Cloud gives builders all of the tools they need to build and run secure applications, but defining controls and ensuring their continual enforcement requires consistent and deliberate work. In response to this need, we’ve seen AWS release more services that streamline activities for security operations teams. In that list of tools are Amazon GuardDuty, Amazon Macie, and, more recently, AWS Security Hub, which these two selections integrate with:

3a) AWS Identity and Access Management (IAM) Access Analyzer

“AWS IAM Access Analyzer generates comprehensive findings that identify resources that can be accessed from outside an AWS account. AWS IAM Access Analyzer does this by evaluating resource policies using mathematical logic and inference to determine the possible access paths allowed by the policies. AWS IAM Access Analyzer continuously monitors for new or updated policies, and it analyzes permissions granted using policies for their Amazon S3 buckets, AWS KMS keys, Amazon SQS queues, AWS IAM roles, and AWS Lambda functions.”

If you’ve worked with IAM, you know that without deliberate design and planning, it can quickly become an unwieldy mess. Disorganization in your IAM policies means you run the risk of creating inadvertent security holes in your infrastructure, which might not be immediately apparent. This new capability, which integrates with AWS Security Hub, streamlines the process of surfacing those latent IAM issues that might otherwise go unnoticed.

3b) Amazon Detective

“Amazon Detective is a new service in Preview that makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. Amazon Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that enables you to easily conduct faster and more efficient security investigations.”

The result of Amazon’s acquisition of Sqrrl in 2018, Amazon Detective is another handy tool that helps separate the signal from the noise in the cacophony of cloud event data generated across accounts. What’s different about this service compared to others like GuardDuty is that it builds relationship graphs, which can be used to rapidly identify links (edges) between events (nodes). This is a powerful capability to have when investigating security events and their possible impact across your Cloud portfolio.

#2 EC2 Image Builder

“EC2 Image Builder is a service that makes it easier and faster to build and maintain secure images. Image Builder simplifies the creation, patching, testing, distribution, and sharing of Linux or Windows Server images.”

2nd Watch clients have needed an automated solution to “bake” consistent machine images for years, and our “Machine Image Factory” solution accelerator was developed to efficiently address that need using tools such as HashiCorp Packer, AWS CodeBuild, and AWS CodePipeline.

The reason this solution has been so popular is that by having your own library of images customized to your organization’s requirements (e.g., security configurations, operations tooling, patching), you can release applications faster and with greater consistency, without your teams burning time and focus watching installation progress bars when they could be working on higher business value activities.

What’s great about AWS releasing this capability as a native service offering is that it makes a best-practice pattern even more accessible to organizations, without obscuring the business outcome behind the array of underlying tools needed to make it happen. If your team wants to get started with EC2 Image Builder but needs help getting from your current “hand crafted” images to Image Builder’s recipes and tests, we can help!

#1 Outposts

“AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience. AWS Outposts is ideal for workloads that need low latency access to on-premises applications or systems, local data processing, and to securely store sensitive customer data that needs to remain anywhere there is no AWS region, including inside company-controlled environments or countries.”

It’s 2019: plants are now meat, and AWS is hardware you can install in your datacenter. I will leave it to you to guess which topic has been more hotly debated on the 2nd Watch Slack, but amongst our clients, Outposts has made its way into many conversations since its announcement at re:Invent 2018. Coming out of last week’s announcement of Outposts GA, I think we will be seeing a lot more of this service in 2020.

One of the reasons I hear clients inquiring about Outposts is that it fills a gap for workloads with proximity or latency requirements to manufacturing plants or other strategic regional facilities. This “hyper-local” need echoes the announcement of AWS Local Zones, which provides a footprint of AWS cloud resources targeted at a specific geography (initially Los Angeles, CA).

Of course, regional datacenters and other hyperconverged platforms already exist to run these types of workloads. What is so powerful about Outposts is that it brings the Cloud operations model back to the datacenter: the same cloud skills your teams have developed and hired for don’t need to be set aside to learn a disparate skill set on a niche hardware vendor’s platform that could be irrelevant three years from now.

I’m excited to see how these picks and all of the new services announced play out over the next year. There is a lot here for businesses to implement in their environments to drive down costs, improve visibility and security, and dial in performance for their differentiating workloads.

Head over to our Twitter account, @2ndWatch, if you think there should be others included in our top 5 list. We’d love to get your take!

-Joe Conlin, Solutions Architect


Using AWS IAM with STS for access to AWS Resources

With increased focus on security and governance in today’s digital economy, I want to highlight a simple but important use case that demonstrates how to use AWS Identity and Access Management (IAM) with Security Token Service (STS) to give trusted AWS accounts access to resources that you control and manage.

What is STS (Security Token Service)?

Security Token Service is an extension of IAM and is one of several web services offered by AWS that does not incur any cost to use.  But, unlike IAM, there is no user interface on the AWS console to manage and interact with STS. Rather, all interaction is done entirely through one of several extensive SDKs or directly using the HTTPS query API.  I will be using Terraform to create some simple resources in my sandbox account and the .NET Core SDK to demonstrate how to interact with STS.

The main purpose and function of STS is to issue temporary security credentials for AWS resources to trusted and authenticated entities.  These credentials operate identically to the long-term keys that typical IAM users have, with a couple of special characteristics:

  • They automatically expire and become unusable after a short and defined period of time elapses
  • They are issued dynamically

These characteristics offer several advantages in terms of application security and development and are useful for cross-account delegation and access.  STS solves two problems for owners of AWS resources:

  • It satisfies the IAM best-practice requirement to regularly rotate access keys
  • It eliminates the need to distribute access keys to external entities or store them within an application

One common scenario where STS is useful involves sharing resources between AWS accounts.  Let’s say, for example, that your organization captures and processes data in S3, and one of your clients would like to push large amounts of data from resources in their AWS account to an S3 bucket in your account in an automated and secure fashion.

Process Automation Challenges

While you could create an IAM user for your client, your corporate data policy requires that you rotate access keys on a regular basis, and this introduces challenges for automated processes.  Additionally, you would like to avoid distributing secret access keys for your resources to external entities.  Let’s use STS to solve this!

To get started, let’s create some resources in your AWS cloud.  Do you even Terraform, bro?

Let’s create a new S3 bucket and set the bucket ACL to be private, meaning nobody but the bucket owner (that’s you!) has access.  Remember that bucket names must be unique across all existing buckets, and they should comply with DNS naming conventions.  Here is the Terraform HCL syntax to do this:

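Something along these lines (the resource label is illustrative, and the provider block assumes the us-west-2 region used later in this post):

provider "aws" {
  region = "us-west-2"
}

resource "aws_s3_bucket" "xaccount_bucket" {
  # Bucket names must be globally unique and DNS-compliant
  bucket = "d4h2123b9-xaccount-bucket"
  acl    = "private"
}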

Great! We now have a bucket… but for now, only the owner can access it.  This is a good start from a security perspective (i.e. “least permissive” access).


Let’s create an IAM role that, once assumed, will allow IAM users with access to this role to have permissions to put objects into our bucket.  Roles are a secure way to grant trusted entities access to your resources.

You can think about roles in terms of a jacket that an IAM user can wear for a short period of time, and while wearing this jacket, the user has privileges that they wouldn’t normally have when they aren’t wearing it.  Kind of like a bright yellow Event Staff windbreaker!

For this role, we will specify that users from our client’s AWS account are the only ones that can wear the jacket. This is done by including the client’s AWS account ID in the Principal statement. AWS Account IDs are not considered to be secret, so your client can share this with you without compromising their security.  If you don’t have a client but still want to try this stuff out, put your own AWS account ID here instead.

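A sketch of the role (substitute your client’s 12-digit account ID for CLIENT_ACCOUNT_ID; the resource label and role name match the ARN shown later in this post):

resource "aws_iam_role" "sts_delegate_role" {
  name = "sts-delegate-role"

  # Trust policy: only principals in the client's account may assume (wear) this role
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::CLIENT_ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}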
Great, now we have a role that our trusted client can wear.  But, right now our client can’t do anything except wear the jacket.  Let’s give the jacket some special powers, such that anyone wearing it can put objects into our S3 bucket.

We will do this by creating a security policy for the role.  This policy will specify exactly what can be done to the bucket, and we will attach it to the role so that it applies to anyone wearing the jacket.  Here is the Terraform syntax to accomplish this:
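Something like this (interpolating the resource labels used above):

resource "aws_iam_role_policy" "sts_delegate_policy" {
  name = "sts-delegate-policy"
  role = "${aws_iam_role.sts_delegate_role.id}"

  # Allow PUTs into our bucket only when the object ACL hands ownership to us
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "${aws_s3_bucket.xaccount_bucket.arn}/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
EOF
}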

A couple of things to note about this snippet – First, we are using Terraform interpolation to inject values from the resources we created previously, specifically a reference to the role and the ARN of the bucket. Second, we are specifying a condition for the S3 policy – one that requires a specific object ACL for the action s3:PutObject, accomplished by requiring the HTTP request header x-amz-acl to have the value bucket-owner-full-control on the PUT object request.

By default, objects PUT in S3 are owned by the account that created them, even if they are stored in someone else’s bucket.  For our scenario, this condition requires your client to explicitly grant ownership of objects placed in your bucket to you; otherwise, the PUT request will fail.

So, now we have a bucket, a policy that grants access to the bucket, and a role that carries that policy.  Now your client needs to get to work writing some code that will allow them to assume the role (wear the jacket) and start putting objects into your bucket.  Your client will need to know a couple of things from you before they get started:

  1. The bucket name and the region it was created in (the example above created a bucket named d4h2123b9-xaccount-bucket in us-west-2)
  2. The ARN for the role (Terraform can output this for you). It will look something like this but will have your actual AWS Account ID: arn:aws:iam::123456789012:role/sts-delegate-role

They will also need to create an IAM User in their account and attach a policy allowing the user to assume roles via STS.  The policy will look similar to this:

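Something like this, scoped to the role ARN you shared (example account ID shown):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/sts-delegate-role"
    }
  ]
}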
Let’s help your client out a bit and provide some C# code snippets for .NET Core 2.0 (available for Windows, macOS, and Linux). To get started, install the .NET SDK for your OS, then fire up a command prompt in a favorite directory and run these commands:
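Something like the following (the app/directory name s3cli is just this example’s convention):

dotnet new console -o s3cli
cd s3cli
dotnet add package AWSSDK.SecurityToken
dotnet add package AWSSDK.S3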
The first command creates a new console app in the subdirectory s3cli.  The remaining commands switch to that directory and add the AWS SDK for .NET packages for the SecurityToken and S3 services.
Once you have the libraries in place, fire up your favorite IDE or text editor (I use Visual Studio Code), then open Program.cs and add some code:
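A sketch of that code (the helper name and session name here are illustrative):

using System;
using Amazon.SecurityToken;
using Amazon.SecurityToken.Model;

// ... inside the Program class ...

static Credentials GetTemporaryCredentials(string roleArn)
{
    // The STS client picks up the client's IAM user keys from the default
    // credential chain (environment variables, shared credentials file, etc.)
    var stsClient = new AmazonSecurityTokenServiceClient();

    var request = new AssumeRoleRequest
    {
        RoleArn = roleArn,                 // the role (jacket) to assume
        RoleSessionName = "xaccount-put"   // illustrative session name
    };

    // The .NET Core SDK exposes async calls only; block here for simplicity
    AssumeRoleResponse response = stsClient.AssumeRoleAsync(request).GetAwaiter().GetResult();
    return response.Credentials;           // temporary key, secret, and session token
}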
This snippet sends a request to STS for temporary credentials using the specified ARN.  Note that the client must provide IAM user credentials to call STS, and that IAM user must have a policy applied that allows it to assume a role from STS.

This next snippet takes the STS credentials, bucket name, and region name, and then uploads the Program.cs file that you’re editing and assigns it a random key/name.  Also note that it explicitly applies the Canned ACL that is required by the sts-delegate-role:
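A sketch, continuing in the same Program class:

using Amazon;
using Amazon.Runtime;
using Amazon.S3;
using Amazon.S3.Model;

// ... inside the Program class ...

static void UploadFile(Credentials creds, string bucketName, string regionName)
{
    // Wrap the temporary STS credentials for use by the S3 client
    var sessionCreds = new SessionAWSCredentials(
        creds.AccessKeyId, creds.SecretAccessKey, creds.SessionToken);

    var s3Client = new AmazonS3Client(sessionCreds,
        RegionEndpoint.GetBySystemName(regionName));

    var request = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = Guid.NewGuid().ToString(),   // random key/name for the object
        FilePath = "Program.cs",           // the file you are editing right now
        // Hand ownership to the bucket owner, as the role's policy condition requires
        CannedACL = S3CannedACL.BucketOwnerFullControl
    };

    s3Client.PutObjectAsync(request).GetAwaiter().GetResult();
}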
So, to put this all together, run this code block and make the magic happen!  Of course, you will have to define and provide proper variable values for your environment, including securely storing your credentials.
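Roughly like so, using the helpers sketched above and the example names from earlier (hard-coded here only for brevity):

static void Main(string[] args)
{
    string roleArn    = "arn:aws:iam::123456789012:role/sts-delegate-role";
    string bucketName = "d4h2123b9-xaccount-bucket";
    string regionName = "us-west-2";

    Credentials creds = GetTemporaryCredentials(roleArn);
    UploadFile(creds, bucketName, regionName);

    Console.WriteLine("Uploaded Program.cs to " + bucketName);
}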
Try it out from the command prompt:
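dotnet run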
If all goes well, you will have a copy of Program.cs in the bucket. Not very useful in itself, but it illustrates how to accomplish the task.

Putting it all together

Here is a high-level summary of the steps we put together:

  1. Your client uses their IAM user to call AWS STS, requesting to assume the role whose ARN you gave them
  2. STS authenticates the client’s IAM user, verifies that the user’s policy allows it to assume the role, then issues temporary credentials to the client.
  3. The client can use the temporary credentials to access your S3 bucket (they will expire soon), and since they are now wearing the Event Staff jacket, they can successfully PUT stuff in your bucket!

There are many other use cases for STS and role assumption; this is just one very simple example. However, with this brief introduction to the concepts, you should now have a decent idea of how STS works with IAM roles and policies, and how you can use STS to give trusted entities access to your AWS resources. For more tips like this, contact us.

-Jonathan Eropkin, Cloud Consultant


Optimizing your AWS environment using Trusted Advisor (Part 2)

AWS provides an oft-overlooked tool, available to accounts with “Business” or “Enterprise” level support, called Trusted Advisor (TA). Trusted Advisor analyzes your current AWS resources for ways to improve your environment in the following categories:

  • Cost Optimization
  • Security
  • Performance
  • Fault Tolerance

It rigorously scours your AWS resources for inefficiencies, waste, potential capacity issues, best practices, security holes, and much, much more, and it provides a very straightforward, easy-to-use interface for viewing the identified issues.

Trusted Advisor will do everything from detecting EC2 instances that are under-utilized (e.g. using an m3.xlarge for a low traffic NAT instance), to detecting S3 buckets that are good candidates for fronting with a CloudFront distribution, to identifying Security Groups with wide-open access to one or more ports, and everything in between.

In Amazon’s own words…

AWS Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance and reliability, or closing security gaps. Since 2013, customers have viewed over 1.7 million best-practice recommendations in AWS Trusted Advisor in the categories of cost optimization, performance improvement, security, and fault tolerance, and they have realized over $300 million in estimated cost reductions. Currently, Trusted Advisor provides 37 checks; the most popular ones are Low Utilization Amazon EC2 Instances…

This week (7/23/2014), AWS announced the release of the new Trusted Advisor Console.

Two new features of the TA console that I found particularly noteworthy and useful are Action Links and Access Management.

Action Links allow you to click a hyperlink next to an issue in the TA Console that redirects you to the appropriate place to take action on the issue. Pretty slick… it saves you time jumping around tabs in your browser or navigating to the correct console and menus. Action Links also take the guesswork out of hunting down the correct place if you aren’t that familiar with the AWS Console.

Access Management allows you to use AWS IAM (Identity and Access Management) credentials to control access to specific categories and checks within Trusted Advisor. This gives you the ability to have granular access control over which people in your organization can view and act on specific checks.

In addition to the console, Trusted Advisor also supports API access. And this wouldn’t be my AWS blog post without some kind of coding example using Python and the boto library. The following example code will print out a nicely formatted list of all the Trusted Advisor categories and each of the checks underneath them in alphabetical order.

#!/usr/bin/python
from boto import connect_support

# Requires an account with Business or Enterprise support and
# credentials configured for boto (e.g., via environment variables)
conn = connect_support()

# Fetch all Trusted Advisor checks (English) and sort them by category
ta_checks = sorted(conn.describe_trusted_advisor_checks('en')['checks'],
                   key=lambda check: check['category'])

# Print each category as an underlined header, then its checks by name
for cat in sorted(set([x['category'] for x in ta_checks])):
    print "\n%s\n%s" % (cat, '-' * len(cat))
    for check in sorted(ta_checks, key=lambda check: check['name']):
        if check['category'] == cat:
            print "  %s" % check['name']

Here is the resulting output (notice all 37 checks are accounted for):

cost_optimizing
---------------
Amazon EC2 Reserved Instances Optimization
Amazon RDS Idle DB Instances
Amazon Route 53 Latency Resource Record Sets
Idle Load Balancers
Low Utilization Amazon EC2 Instances
Unassociated Elastic IP Addresses
Underutilized Amazon EBS Volumes

fault_tolerance
---------------
Amazon EBS Snapshots
Amazon EC2 Availability Zone Balance
Amazon RDS Backups
Amazon RDS Multi-AZ
Amazon Route 53 Deleted Health Checks
Amazon Route 53 Failover Resource Record Sets
Amazon Route 53 High TTL Resource Record Sets
Amazon Route 53 Name Server Delegations
Amazon S3 Bucket Logging
Auto Scaling Group Health Check
Auto Scaling Group Resources
Load Balancer Optimization
VPN Tunnel Redundancy

performance
-----------
Amazon EBS Provisioned IOPS (SSD) Volume Attachment Configuration
Amazon Route 53 Alias Resource Record Sets
CloudFront Content Delivery Optimization
High Utilization Amazon EC2 Instances
Large Number of EC2 Security Group Rules Applied to an Instance
Large Number of Rules in an EC2 Security Group
Overutilized Amazon EBS Magnetic Volumes
Service Limits

security
--------
AWS CloudTrail Logging
Amazon RDS Security Group Access Risk
Amazon Route 53 MX and SPF Resource Record Sets
Amazon S3 Bucket Permissions
IAM Password Policy
IAM Use
MFA on Root Account
Security Groups - Specific Ports Unrestricted
Security Groups - Unrestricted Access

In addition to the meta-data about categories and checks, actual TA check results and recommendations can also be pulled and refreshed using the API.
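For example, something along these lines (the method names mirror the AWS Support API actions DescribeTrustedAdvisorCheckResult and RefreshTrustedAdvisorCheck; consider this a sketch rather than production code):

#!/usr/bin/python
from boto import connect_support

conn = connect_support()
for check in conn.describe_trusted_advisor_checks('en')['checks']:
    # Pull the most recent result for this check and print its status
    result = conn.describe_trusted_advisor_check_result(check['id'], 'en')['result']
    print "%s: %s" % (check['name'], result['status'])
    # To queue a fresh run of a check:
    # conn.refresh_trusted_advisor_check(check['id'])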

While Trusted Advisor is a great tool to quickly scan your AWS environment for inefficiencies, waste, potential cost savings, basic security issues, and best practices, it isn’t a “silver bullet” solution. It takes a specific set of AWS architectural understanding, skills, and experience to look at an entire application stack or ecosystem and ensure it is properly designed, built, and/or tuned to best utilize AWS and its array of complex and powerful building blocks. This is where a company like 2nd Watch can add immense value in providing a true “top down” cloud optimization. Our architects and engineers are the best in the business at ensuring applications and infrastructure are designed and implemented using AWS and cloud computing best practices with a fierce attention to detail and focus on our customers’ success in their business and cloud initiatives.

-Ryan Kennedy, Senior Cloud Architect


AWS Identity and Access Management (IAM)

Dealing with organizational change is a challenge in today’s fast-paced business environment.  Long gone are the days when employees stayed with companies until retirement.  The mindset of many employees is to move around to different companies for a promotion, a better salary, or new challenging opportunities.

Managing organizational change in terms of user access is becoming more and more complex due to the changing technology landscape.  With systems accessible over the network, IT shops can’t just deny ex-employees physical access to the building; they have to revoke their network credentials as well. With the proliferation of cloud technologies, this becomes even more of a challenge because your digital assets are accessible over the internet from anywhere in the world. In many technology-centric companies, managing login credentials and access is paramount to securing the assets of the business and coping with organizational change.

IAM Permissions

To solve this problem, AWS has a service called Identity and Access Management (IAM).  IAM is an Amazon Web Services feature that allows you to regulate use of and access to AWS resources.  With IAM you can create and manage users and groups for access to your AWS environment.  IAM also gives you the ability to assign permissions to those users and groups to allow or deny access.

With IAM you can assign users access keys, passwords, and even Multi-Factor Authentication (MFA) devices to access your AWS environment.  IAM even allows you to manage access with federated users, a way to configure access using credentials that expire and are manageable through traditional corporate directories like Microsoft Active Directory.

With IAM you can set permissions based on AWS-provided policy templates like “Administrator Access,” which allows full access to all AWS resources and services; “Power User Access,” which provides full access to all AWS resources and services but does not allow management of users and groups; or “Read Only Access.”  These policies can be applied to users and groups.  Some policy templates limit users to certain services, like “Amazon EC2 Full Access” and “Amazon EC2 Read Only Access,” which give a user full access and read-only access to EC2 via the AWS Management Console, respectively.


IAM also allows you to define your own policies to manage permissions.  Say you want an employee to be able to just start and stop instances; you can use the IAM Policy Generator to create a custom policy to do just that, as sketched below.  You select the effect, Allow or Deny, the specific service, and the action.  IAM also gives you the ability to layer permissions on top of each other by adding additional statements to the policy.
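For example, a custom policy for that start/stop scenario might look something like this (Resource is left wide open here for brevity; it could be scoped to specific instance ARNs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}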


Once you create a policy, you can apply it to any user or group, and it takes effect immediately.  When something changes in the organization, like an employee leaving, AWS IAM simplifies management of access and identity by allowing you to simply delete the user or the policy attached to that user. If an employee moves from one group to another, it is easy to reassign the user to a different group with the appropriate access level.  As you can see, the variety of policy rules is extensive, allowing you to create very fine-grained permissions around your AWS resources and services.

Another great thing about IAM is that it’s a free service that comes with every AWS account, yet it is surprising how many people overlook this powerful tool.  It is highly recommended to always use IAM with any AWS account.  It gives you an organized way to manage users and access to your AWS account, and it simplifies the management nightmare of maintaining access controls as the environment grows.

-Derek Baltazar, Senior Cloud Engineer
