
How to use waiters in boto3 (And how to write your own!)

What is boto3?

Boto3 is the Python SDK for interacting with the AWS API. It makes it easy to use Python to manipulate AWS resources and automate infrastructure.

What are boto3 waiters and how do I use them?

A number of requests in AWS made through boto3 are not instant; deploying a new server or RDS instance are common examples. For some long-running requests, it's fine to initiate the request and check for completion at some later time. In many cases, though, we want to wait for the request to complete before moving on to subsequent parts of the script that rely on that long-running process having finished. One example is a script that copies an AMI to another account by sharing all of its snapshots. After sharing the snapshots with the other account, you would need to wait for the local snapshot copies to complete before registering the AMI in the receiving account. Luckily, a snapshot-completed waiter already exists, and here's what that waiter would look like in Python:
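The sketch below is minimal; the snapshot ID and region are placeholders rather than values from a real environment.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')  # region is a placeholder

# Wait for the copied snapshot(s) to reach the "completed" state before
# registering the AMI in the receiving account.
waiter = ec2.get_waiter('snapshot_completed')
waiter.wait(
    SnapshotIds=['snap-0123456789abcdef0'],  # placeholder snapshot ID
    WaiterConfig={
        'Delay': 15,        # seconds between checks
        'MaxAttempts': 40   # 15 x 40 = 600 seconds total, the common default
    }
)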

As for the waiters' default configuration and how long they wait, you can find the details in the boto3 waiter docs, but the total wait is 600 seconds in most cases. Each one is configurable to be as short or as long as you'd like.

Writing your own custom waiters

As you can see, using boto3 waiters is an easy way to set up a loop that waits for completion without having to write the polling code yourself. But how do you find out whether a specific waiter exists? The easiest way is to explore the particular boto3 client on the docs page and check out the list of waiters at the bottom. Let's walk through the anatomy of a boto3 waiter. The waiter is actually instantiated in botocore and then abstracted to boto3. Looking at the code there, we can derive what's needed to generate our own waiter:

  1. Waiter Name
  2. Waiter Config
    1. Delay
    2. Max Attempts
    3. Operation
    4. Acceptors
  3. Waiter Model

The first step is to name your custom waiter. You'll want it to be something descriptive; in our example, it will be "CertificateIssued". This will be a waiter that waits for an ACM certificate to be issued (note that there is already a CertificateValidated waiter, but this is only to showcase the creation of a waiter). Next, we pick out the configuration for the waiter, which boils down to four parts. Delay is the amount of time, in seconds, between tests. Max Attempts is how many attempts it will make before it fails. Operation is the boto3 client operation that you're using to get the result you're testing; in our example, we're calling "DescribeCertificate". Acceptors is how we test the result of the Operation call. Acceptors are probably the most complicated portion of the configuration. They determine how to match the response and what result to return. Acceptors have four parts: Matcher, Expected, Argument, and State.

  • State: This is what the acceptor will return based on the result of the matcher function.
  • Expected: This is the expected response that you want from the matcher to return this result.
  • Argument: This is the argument sent to the matcher function to determine if the result is expected.
  • Matcher: Matchers come in five flavors: Path, PathAll, PathAny, Status, and Error. The Status and Error matchers effectively check the status of the HTTP response and check for an error, respectively. They return failure states and short-circuit the waiter so you don't have to wait until the end of the time period when the command has already failed. The Path matcher matches the Argument against a single expected result. In our example, running DescribeCertificate returns a "Certificate.Status" in the response. Taking that as the argument, the desired expected result is "ISSUED". Notice that if the result is "PENDING_VALIDATION" we set the state to "retry" so the waiter keeps trying for the result we want. The PathAny/PathAll matchers work with operations that return a Python list result: PathAny matches if any item in the list matches, and PathAll matches if all the items in the list match.

Once the configuration is complete, we feed it into the Waiter Model and call "create_waiter_with_client". Now our custom waiter is ready to wait. If you need more examples of waiters and how they are configured, check out the botocore GitHub repository and poke through the various services. If a service has waiters configured, they will be in a file called waiters-2.json. Here's the finished code for our custom waiter.
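A sketch of that code, using placeholder values for the delay, max attempts, and certificate ARN, looks like this:

import boto3
from botocore.waiter import WaiterModel, create_waiter_with_client

# Waiter Config: name, delay, max attempts, operation, and acceptors
waiter_config = {
    'version': 2,
    'waiters': {
        'CertificateIssued': {
            'operation': 'DescribeCertificate',
            'delay': 10,          # placeholder: seconds between attempts
            'maxAttempts': 60,    # placeholder: attempts before giving up
            'acceptors': [
                {
                    'matcher': 'path',
                    'argument': 'Certificate.Status',
                    'expected': 'ISSUED',
                    'state': 'success'
                },
                {
                    'matcher': 'path',
                    'argument': 'Certificate.Status',
                    'expected': 'PENDING_VALIDATION',
                    'state': 'retry'
                },
                {
                    'matcher': 'error',
                    'expected': 'ResourceNotFoundException',
                    'state': 'failure'
                }
            ]
        }
    }
}

acm = boto3.client('acm')
waiter_model = WaiterModel(waiter_config)
custom_waiter = create_waiter_with_client('CertificateIssued', waiter_model, acm)

# Placeholder ARN; wait() polls DescribeCertificate until an acceptor matches
# or maxAttempts is exhausted.
custom_waiter.wait(
    CertificateArn='arn:aws:acm:us-east-1:123456789012:certificate/placeholder'
)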

And that's it. Custom waiters allow you to chain automation steps together without writing redundant code or complicated polling loops. Have questions about writing custom waiters or boto3? Contact us

-Coin Graham, Principal Cloud Consultant


The Cloudcast Podcast with Jeff Aden, Co-Founder and EVP at 2nd Watch

The Cloudcast’s Aaron and Brian talk with Jeff Aden, Co-Founder and EVP at 2nd Watch, about the evolution of 2nd Watch as a Cloud Integrator as AWS has grown and shifted its focus from startups to enterprise customers. Listen to the podcast at http://www.thecloudcast.net/2019/02/evolution-of-public-cloud-integrator.html.

Topic 1 – Welcome to the show Jeff. Tell us about your background, the founding of 2nd Watch, and how the company has evolved over the last few years.

Topic 2 – We got to know 2nd Watch at one of the first AWS re:Invent shows, as they had one of the largest booths on the floor. At the time, they were listed as one of AWS’s best partners. Today, 2nd Watch provides management tools, migration tools, and systems-integration capabilities. How does 2nd Watch think of themselves?

Topic 3 –  What are the concerns of your customers today, and how does 2nd Watch think about matching customer demands and the types of tools/services/capabilities that you provide today?

Topic 4 – We'd like to pick your brain about the usage and insights you're seeing from your customers' usage of AWS. It's mentioned that 100% are using DynamoDB, 53% are using Elastic Kubernetes, and a fast-growing segment is using things like Athena, Glue and SageMaker. What are some of the types of applications that you're seeing customers build that leverage these new models?

Topic 5 – With technologies like Outpost being announced, after so many years of AWS saying “Cloud or legacy Data Center,” how do you see this impacting the thought process of customers or potential customers?


Leveraging the cloud for SOC 2 compliance

In a world of high-profile attacks, breaches, and information compromises, companies that rely on third parties to manage and/or store their data sets are wise to consider a roadmap for their security, risk and compliance strategy. Failure to detect or mitigate the loss of data or other security breaches, including breaches of their suppliers' information systems, could seriously expose a cloud user and their customers to a loss or misuse of information so harmful that it becomes difficult to recover from. In 2018 alone, nearly 500 million records were exposed by data breaches, according to the Identity Theft Resource Center's findings, https://www.idtheftcenter.org/2018-end-of-year-data-breach-report/. While absolute security can never be attained in a running business, there are frameworks, tools, and strategies that can be applied to minimize the risks to acceptable levels while maintaining continuous compliance.

SOC 2 is one of those frameworks, and it is particularly beneficial in the managed services provider space. It is built on the AICPA's Trust Services Principles (TSP) for service security, availability, confidentiality, processing integrity, and privacy. SOC 2 is well suited to a wide range of applications, especially in the cloud services space. Companies have realized that their security and compliance frameworks must stay aligned with the inherent changes that come with cloud evolution, which includes staying abreast of developing capabilities and feature enhancements; AWS, for example, announced a flurry of new services and features at its annual re:Invent conference in 2018 alone. When SOC 2's common controls are embedded into their cloud strategy, companies can use them to build the foundation for a robust information systems security program.

CISOs, CSOs, and company stakeholders must not form the company security strategy in a vacuum. Drawing on core leaders in the organization, both at the management level and at the individual contributor level, should be part of the overall security development strategy, just as it is with successful innovation strategies. In fact, the security strategy should be integrated with the company's innovation strategy. One of the best ways to ensure this happens is to develop a steering committee with participation from all major divisions and/or groups. This is easier in smaller organizations, where information can quickly flow vertically and horizontally; larger organizations simply need to ensure that vehicles are in place to allow a quick flow of information to all stakeholders.

Organizations with strong security programs have good controls in place to address each of the major domain categories under the Trust Services Principles. Each principle can be described through the controls that the company has established. Below are some ways that managed cloud services providers like 2nd Watch meet the requirements for security, availability, and confidentiality while simultaneously lowering the overall risk to their business and their customers' businesses:

Security

  • Change Management – Implement both internal and external system change management using effective ITSM tools to track, at a minimum, the change subject, descriptions, requester, urgency, change agent, service impact, change steps, evidence of testing, back-out plan, and appropriate stakeholder approvals.
  • End-User Security – Implement full-disk encryption for end-user devices, deploy centrally managed directory services for authorization, use multi-factor authentication, follow password/key management best practices, use role-based access controls, segregate permissions using a least-privilege approach, and document the policies and procedures. These are all great ways to secure environments fairly quickly.
  • Facilities – While "security of the cloud" falls under the responsibility of your cloud infrastructure provider, your Managed Services Provider should still work to adequately protect its own, albeit out-of-scope, physical spaces. Door access badges, logs, and monitoring of entry/exit points are positive ways to prevent unauthorized physical entry.
  • AV Scans – Ensure that your cloud environments are built with AV scanning solutions.
  • Vulnerability Scans and Remediation – Ensure that your Managed Services Provider or third-party provider is running regular vulnerability scans and performing prompt risk remediation. Independent testing of the provider's environment will help identify any unexpected risks, so implementing an annual penetration test is important.

Availability

  • DR and Incident Escalations – Ensure that your MSP maintains current, documented disaster recovery plans with at least annual exercises. Well-thought-out plans include testing of upstream and downstream elements of the supply chain, including a plan for notifying all stakeholders.
  • Risk Mitigation – Implement an annual formal risk assessment with a risk mitigation plan for the most likely situations.

Confidentiality

  • DLP – Implement techniques to prevent data from being lost by unsuspecting employees or customers. Examples include limiting use of external media ports to authorized devices, deprecating old cipher protocols, and blocking unsafe or malicious downloads.
  • HTTPS – Use secure protocols and connections for the safe transmission of confidential information.
  • Classification of Data – Use a tagging strategy to identify elements of your cloud environment so that your Managed Services Providers or third parties can properly secure and protect them.
  • Emails – Use email encryption when sending any confidential information. Also, check with your own legal department for proper use of the confidentiality statement at the end of emails, as appropriate to your business.

By implementing these SOC 2 controls, companies can expect to have a solid security framework to build on. Regardless of their stage in the cloud adoption lifecycle, businesses must continue to demonstrate to their stakeholders (customers, board members, employees, shareholders) that they have a secure and compliant business. As with any successful customer-service provider relationship, properly formed contracts and agreements come into play. Without these elements in place and in constant use, it is difficult to evaluate how well a company is measuring up. This is where controls and a compliance framework like SOC 2 play a critical role.

Have questions on becoming SOC 2 compliant? Contact us!

– By Eddie Borjas, Director of Risk & Compliance


Operating and maintaining systems at scale with automation

Managing numerous customers with unique characteristics and tens of thousands of systems at scale can be challenging. Here, I want to pull back the curtain on some of the automation and tools that 2nd Watch develops to solve these problems. Below I will outline our approach and its three main components: Collect, Model, and React.

Collect: The first problem facing us is an overwhelming flood of data. We have CloudWatch metrics, CloudTrail events, custom monitoring information, service requests, incidents, tags, users, accounts, subscriptions, alerts, etc. The data is all structured differently, tells us different stories, and is collected at an unrelenting pace. We need to identify all the sources, collect the data, and store it in a central place so we can begin to consume it and make correlations between various events.

Most of the data described above can be gathered directly from the AWS and Azure APIs, while the rest may need to be ingested by an agent or custom scripts. We also need to make sure a consistent core set of data is brought in for every customer, while expanding that to include specialized data that only certain customers may have. All the data is gathered and sent to our Splunk indexers. We build an index for every customer to ensure that data stays segregated and secure.
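To make the Collect step concrete, here is a rough sketch of pulling one CloudWatch metric with boto3 and forwarding it to a customer-specific Splunk index over the HTTP Event Collector. The Splunk endpoint, token, index name, and instance ID are all hypothetical, and this is an illustration rather than 2nd Watch's actual collector.

import boto3
import requests
from datetime import datetime, timedelta

# Hypothetical Splunk HTTP Event Collector endpoint and per-customer index
SPLUNK_HEC_URL = 'https://splunk.example.com:8088/services/collector/event'
SPLUNK_HEC_TOKEN = '00000000-0000-0000-0000-000000000000'
CUSTOMER_INDEX = 'customer_acme'

cloudwatch = boto3.client('cloudwatch')

# Pull one core metric (CPU utilization for a single instance) for the last hour
datapoints = cloudwatch.get_metric_statistics(
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # hypothetical
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Average'],
)['Datapoints']

# Forward each datapoint to the customer's dedicated Splunk index
for point in datapoints:
    requests.post(
        SPLUNK_HEC_URL,
        headers={'Authorization': f'Splunk {SPLUNK_HEC_TOKEN}'},
        json={
            'index': CUSTOMER_INDEX,
            'sourcetype': 'aws:cloudwatch',
            'event': {
                'metric': 'CPUUtilization',
                'average': point['Average'],
                'timestamp': point['Timestamp'].isoformat(),
            },
        },
    )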

Model: Next we need to present the data in a useful way. The modeling of the data can vary depending on who is using it or how it is going to be consumed. A dashboard with a quick look at several important metrics can be very useful to an engineer to see the big picture. Seeing this data daily or throughout the day will make anomalies very apparent. This is especially helpful because gathering and organizing the data at scale can be time consuming, and thus could reasonably only be done during periodic audits.

Modeling the data in Splunk allows for a low-overhead view with up-to-date data so the engineer can focus on more important things. A great example of this is provisioned resources by region. If the engineer looks at the data on a regular basis, they will quickly notice when the number of provisioned resources changes drastically. A 20% increase in the number of EC2 resources could mean several things: perhaps the customer is doing a large deployment, or maybe Justin accidentally put his AWS access key and secret key on GitHub (again).

We provide our customers with regular reports and reviews of their cloud environments, and the data collected and modeled in this tool feeds those deliverables. Historical data trended over a month, quarter, and year can help you ask questions or tell a story. It can help you forecast your business, or the number of engineers needed to support it. We recently used the historical trending data to show progress on a large project that included waste removal and a resource tagging overhaul for a customer. Not only were we able to show progress throughout the project, but we used that same view to ensure that waste did not creep back up and that the new tagging standards were being applied going forward.

React: Finally, it's time to act on the data we collected and modeled. Using Splunk alerts, we can apply conditional logic to the data patterns and act upon them. From Splunk we can call our ticketing system's API and create a new incident for an engineer to investigate a concerning trend, or notify the customer of a potential security risk. We can also call our own APIs that trigger remediation workflows. A few common scenarios are encrypting unencrypted S3 buckets, deleting old snapshots, restarting failed backup jobs, and requesting cloud provider limit increases.
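As one hedged sketch of what such a remediation might look like under the hood (not necessarily how 2nd Watch's own workflow is implemented), a boto3 script could find buckets without default encryption and turn it on:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

for bucket in s3.list_buckets()['Buckets']:
    name = bucket['Name']
    try:
        # Raises an error if the bucket has no default encryption configured
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response['Error']['Code'] == 'ServerSideEncryptionConfigurationNotFoundError':
            # Apply SSE-S3 (AES-256) default encryption as the remediation
            s3.put_bucket_encryption(
                Bucket=name,
                ServerSideEncryptionConfiguration={
                    'Rules': [{
                        'ApplyServerSideEncryptionByDefault': {'SSEAlgorithm': 'AES256'}
                    }]
                },
            )
            print(f'Enabled default encryption on {name}')
        else:
            raise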

Because we have several independent data sources providing information, we can also correlate events and have more advanced conditional logic. If we see that a server is failing status checks, we can also look to see if it recently changed instance families or if it has all the appropriate drivers. This data can be included in the incident and available for the engineer to review without having to check it themselves.

The entire premise of this idea, and the solution it outlines, is efficiency: using data and automation to make quicker and smarter decisions. Operating and maintaining systems at scale brings numerous challenges, and if you are unable to efficiently accommodate the vast amount of information coming at you, you will spend a lot of energy just trying to keep your head above water.

For help getting started in automating your systems, contact us.

-Kenneth Weinreich, Managed Cloud Operations


Continuous Compliance: Continuous Iteration

For most students, one of the most stressful experiences of their educational career is exam day. Exams are a semi-public declaration of your ability to learn, absorb, and regurgitate the curriculum, and while the rewards for passing are rather mundane, the ramifications of failure are tremendous. My anecdotal educational experience indicates that exam success is primarily due to preparation, with a fair bit of luck thrown in. If you were like me in school, your exam preparation plan consisted mostly of cramming, with a heavy reliance on luck that the hours spent jamming material into your brain would cover at least 70% of the exam contents.

After I left my education career behind and started down a new path in business technology, I was rather dismayed to find that the anxiety of testing and exams continued, but in the form of audits! So much for the "we will never use this stuff in real life" refrain we students expressed in Calculus 3 class; exams and tests continue even when you're all grown up. Oddly enough, the recipe for audit success was remarkably similar: a heavy dose of preparation with a fair bit of luck thrown in. Additionally, it seemed that many businesses also adhered to my cram-for-the-exam pattern. Despite full knowledge and disclosure of the due dates and subject material, audit preparation largely consisted of ignoring it until the last minute, followed by a flurry of activity, stress, anxiety, and panic, with a fair bit of hoping and wishing-upon-a-star that the auditors wouldn't dig too deeply. There must be a better way to prepare and execute (hint: there is)!

There are some key differences between school exams and business audits:

  • Audits are open-book: the subject matter details and success criteria are well-defined and well-known to everyone
  • Audits have subject matter and success criteria that remain largely unchanged from one audit to the next

Given these differences, it would seem logical that preparing for audits should be easy. We know exactly what the audit will cover, we know when it will happen, and we know what is required to pass. If only it were that easy. Why, then, do we still cram for the exam and wait until the last minute? I think it comes down to these things:

  • Audits are important, just like everything else
  • The scope of the material seems too large
  • Our business memory is short

Let’s look at that last one first.  Audits tend to be infrequent, often with months or years going by before they come around again.  Like exam cramming, it seems that our main goal is to get over the finish line.  Once we are over that finish line, we tend to forget all about what we learned and did, and our focus turns to other things.  Additionally, the last-minute cram seems to be the only way to deal with the task at hand, given the first two points above.  Just get it done, and hope.

What if our annual audits were more frequent, like once a week?  The method of cramming is not sustainable or realistic.  How could we possibly achieve this?

Iteration.

Iteration is, by definition, a repetitive process that intends to produce a series of outcomes.  Both simple and complex problems can often be attacked and solved by iteration:

  • Painting a dark-colored room in a lighter color
  • Digging a hole with a shovel
  • Building a suspension bridge
  • Attempting to crack an encrypted string
  • Achieving a defined compliance level in complex IT systems

Note that last one: achieving audit compliance within your IT ecosystem can be an iterative process, and it doesn’t have to be compressed into the 5 days before the audit is due.

The iteration (repetitive process) is simple: define the goal, identify and recognize, notify and remediate, then analyze and report, and repeat.

The scope and execution of the iteration is where things tend to break down.  The key to successful iterations starts with defining and setting realistic goals. When in doubt, keep the goals small!  The idea here is being able to achieve the goal repeatedly and quickly, with the ability to refine the process to improve the results.

Define

We need to clearly define what we are trying to achieve. Start big-picture and then drill down into something much smaller and achievable. This accomplishes two things: 1) it builds some confidence that we can do this, and 2) it gives us a pattern we can "drill up" with and apply to a similar problem. Here is a basic example of starting big-picture and drilling down to an achievable goal: in this case, we land on monitoring failed user logons.

Identify and Recognize

Given that we are going to monitor failed user logons, we need a way to do so. There are manual ways to achieve this, but since we will be doing it over and over, it clearly needs to be automated. Here is where tooling comes into play. Spend some time identifying tools that can help with log aggregation and management, and then find a way to automate the monitoring of failed network user authentication logs.
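As a hedged sketch of what that automated check might look like, assuming the authentication logs are being forwarded to a CloudWatch Logs group (the log group name and threshold here are illustrative only, not a prescribed setup):

import boto3
from datetime import datetime, timedelta

# Illustrative values: adjust the log group and threshold to your environment
LOG_GROUP = '/auth/linux/secure'
THRESHOLD = 10  # failed attempts in the window before we escalate

logs = boto3.client('logs')
window_start = int((datetime.utcnow() - timedelta(minutes=15)).timestamp() * 1000)

# Pull recent failed-authentication events from the aggregated log group
events = logs.filter_log_events(
    logGroupName=LOG_GROUP,
    startTime=window_start,
    filterPattern='"Failed password"',
)['events']

if len(events) >= THRESHOLD:
    # Hand off to the Notify and Remediate step (ticket, alert, lockout, etc.)
    print(f'{len(events)} failed logons in the last 15 minutes - escalate')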

Notify and Remediate

Now that we have an automated way to aggregate and manage failed network user authentication logs, we need to look at our (small and manageable) defined goal and perform the necessary notifications and remediations to meet the requirement.  Again, this will need to be repeated over and over, so spend some time identifying automated tools that can help with this process.

Analyze and Report

Now that we are meeting the notification and remediation requirements in a repeatable and automated fashion, we need to analyze and report on the effectiveness of our remedy and, based on the analysis, make necessary improvements to the process, and then repeat!

Now that we have one iterative and automated process in place that meets and remedies an audit requirement, there is one less thing to address when the audit comes around. We know that this requirement is satisfied, and we have the process, analysis, and reports to prove it. No more cramming for this particular compliance requirement; we are now handling it continuously.

Now, what about the other 1,000 audit requirements?   As the saying goes, “How do you eat an elephant (or a Buick)?  One bite at a time.”  You need the courage to start, and from there every bite gets you one step closer to the goal.

Keys to achieving Continuous Compliance include:

  • You must start somewhere. Pick something!
  • Start big-picture, then drill down to something small and achievable.
  • Automation is a must!

For help getting started on the road to continuous compliance, contact us.

-Jonathan Eropkin, Cloud Consultant


Using Docker Containers to Move Your Internal IT Orgs Forward

Many people are looking to take advantage of containers to isolate their workloads on a single system. Unlike a traditional virtual machine, where every application shares the same operating system and packages, containers allow you to segment off multiple applications, each with its own set of processes and dependencies, on the same instance.

Let’s walk through some grievances that many of us have faced at one time or another in our IT organizations:

Say, for example, your development team is setting up a web application. They want to set up a traditional three-tier system with app, database, and web servers. They notice there is a lot of support in the open source community for their app when it is run on Ubuntu Trusty (Ubuntu 14.04 LTS) and later. They've developed the app in their local sandbox with an Ubuntu image they downloaded; however, their company is a Red Hat shop.

Now, depending on the type of environment you're in, chances are you'll have to wait for the admins to provision an environment for you. This often entails (but is not limited to) spinning up an instance, reviewing the most stable version of the OS, creating a new hardened AMI, adding it to Packer, figuring out which configs to manage, and refactoring provisioning scripts to utilize aptitude and Ubuntu's directory structure (e.g., Debian has over 50,000 packages to choose from and manage). On top of that, the most stable version of Ubuntu is missing some newer packages that you've tested in your sandbox and that need to be pulled from source or another repository. At this point, the developers are producing configuration runbooks to support the app while the admin gets up to speed with the OS (not significant, but time-consuming nonetheless).

You can see my point here. A significant amount of overhead has been introduced, and it’s stagnating development. And think about the poor sysadmins. They have other environments that they need to secure, networking spaces to manage, operations to improve, and existing production stacks they have to monitor and support while getting bogged down supporting this app that is still in the very early stages of development. This could mean that mission-critical apps are potentially losing visibility and application modernization is stagnating. Nobody wins in this scenario.

Now let us revisit the same scenario with containers:

I was able to run my Jenkins build server and an NGINX web proxy, both on a hardened CentOS 7 AMI provided by the systems engineers with Docker installed. From there I executed a docker pull command pointed at our local repository and deployed two Docker images with Debian as the underlying OS.

$ docker pull my.docker-repo.com:4443/jenkins
$ docker pull my.docker-repo.com:4443/nginx

$ docker ps
CONTAINER ID   IMAGE                                         COMMAND                  CREATED          STATUS          PORTS                                      NAMES
7478020aef37   my.docker-repo.com:4443/jenkins/jenkins:lts   "/sbin/tini -- /us…"     16 minutes ago   Up 16 minutes   8080/tcp, 0.0.0.0:80->80/tcp, 50000/tcp    jenkins
d68e3b96071e   my.docker-repo.com:4443/nginx/nginx:lts       "nginx -g 'daemon of…"   16 minutes ago   Up 16 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx

$ sudo systemctl status jenkins-docker

jenkins-docker.service - Jenkins
Loaded: loaded (/etc/systemd/system/jenkins-docker.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2018-11-08 17:38:06 UTC; 18min ago
Process: 2006 ExecStop=/usr/local/bin/jenkins-docker stop (code=exited, status=0/SUCCESS)

The processes above were executed on the actual instance. Note how I'm able to cat the OS release file from within the container:

$ sudo docker exec d68e3b96071e cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

I was able to do so because Docker containers do not have their own kernel; rather, they share the kernel of the underlying host via Linux system calls (e.g., setuid, stat, umount), just like any other application. These system calls (or syscalls for short) are standard across kernels, and Docker supports kernel versions 3.10 and higher. In the event older syscalls are deprecated and replaced with new ones, you can update the kernel of the underlying host, which can be done independently of an OS upgrade. As far as the containers go, the binaries and apt package management tools are the same as if you had installed Ubuntu on an EC2 instance (or VM).

Q: But I'm running a Windows environment. Those OSs don't have a Linux kernel.

Yes, developers may want to remove the cost overhead associated with Windows licenses by exploring running their apps on a Linux OS. Others may simply want to modernize their .NET applications by testing the latest versions in containers. Docker allows you to run Linux containers on Windows 10 and Server 2016. Because Docker was initially written to run on Linux distributions, in order to take advantage of multi-tenant hosting you will have to run Hyper-V containers, which provision a thin VM on top of your hosts. You can then manage your mixed environment of Windows and Linux containers via the --isolation option. More information can be found in the Microsoft and Docker documentation.

Conclusion:

IT teams need to be able to help drive the business forward. Newer technologies and security patches arrive on a daily basis. Developers need to be able to freely work on modernizing their code and applications. Concurrently, operations needs to be able to support and enhance the pipelines and platforms that get the code out quickly and securely. Leveraging Docker containers in conjunction with these pipelines helps ensure both are happening in parallel without unnecessary overhead. This allows teams to work independently in the early stages of the development cycle and more collaboratively to get releases out the door.

For help getting started leveraging your environment to take advantage of containerization, contact us.

-Sabine Blair, Systems Engineer & Cloud Consultant


The Most Popular AWS Products of 2018

Big Data and Machine Learning Services Lead the Way

If you’ve been reading this blog, or otherwise following the enterprise tech market, you know that the worldwide cloud services market is strong. According to Gartner, the market is projected to grow by 17% in 2019, to over $206 billion.

Within that market, enterprise IT departments are embracing cloud infrastructure and related services like never before. They're attracted to tools and technologies that enable innovation, cost savings, faster time-to-market for new digital products and services, flexibility and productivity. They want to be able to scale their infrastructure up and down as the situation warrants, and they're enamored with the idea of "digital transformation."

In its short history, cloud infrastructure has never been more exciting. At 2nd Watch, we are fortunate to have a front-row seat to the show, with more than 400 enterprise workloads under management and over 200,000 instances in our managed public cloud. With 2018 now in our rearview mirror, we thought this a good time for a quick peek back at the most popular Amazon Web Services (AWS) products of the past year. We aggregated and anonymized our AWS customer data from 2018, and here’s what we found:

The top five AWS products of 2018 were: Amazon Virtual Private Cloud (used by 100% of 2nd Watch customers); AWS Data Transfer (100%); Amazon Simple Storage Service (100%); Amazon DynamoDB (100%) and Amazon Elastic Compute Cloud (100%). Frankly, the top five list isn’t surprising. It is, however, indicative of legacy workloads and architectures being run by the enterprise.

Meanwhile, the fastest-growing AWS products of 2018 were: Amazon Athena (68% CAGR, as measured by dollars spent on this service with 2nd Watch in 2018 v. 2017); Amazon Elastic Container Service for Kubernetes (53%); Amazon MQ (37%); AWS OpsWorks (23%); Amazon EC2 Container Service (21%); Amazon SageMaker (21%); AWS Certificate Manager (20%); and AWS Glue (16%).

The growth in data services like Athena and Glue, correlated with SageMaker, is interesting. Typically, the hype isn't supported by the data, but clearly customers are moving forward with big data and machine learning strategies. These three services were also the fastest-growing services in Q4 2018.

Looking ahead, I expect EKS to be huge this year, along with SageMaker and serverless. Based on job postings and demand in the market, Kubernetes is the most requested skill set in the enterprise. For a look at the other AWS products and services that rounded out our list for 2018, download our infographic.

– Chris Garvey, EVP Product


Top 5 takeaways from AWS re:Invent 2018

While AWS re:Invent 2018 is still fresh in our minds, let’s take a look at some of the most significant and exciting AWS announcements made. Here are our top five takeaways from AWS re:Invent 2018.

Number 5: AWS DeepRacer

To be honest, when I first saw DeepRacer I wasn’t paying full attention to the keynote.  After previous years’ announcements of Amazon Snowball and Snowmobile, I thought this might be the next version of how AWS is going to be moving data around. Instead we have an awesome little car that will give people exposure to programming and machine learning in a fun and interesting way. I know people at 2nd Watch are hoping to form a team so that we can compete at the AWS races. Anything that can get people to learn more about machine learning is a good thing as so many problems could be solved elegantly with machine learning solutions.

Number 4: Amazon Managed Blockchain and Amazon Quantum Ledger Database

Amazon has finally plunged directly into the blockchain world that gets so much media attention these days. Built upon the Amazon Quantum Ledger Database (QLDB), Amazon Managed Blockchain will give you the ability to integrate with Ethereum and Hyperledger Fabric. QLDB allows you to store information in a way that ensures transactions can never be lost or modified. For instance, rather than storing security access in a log file or a database, you can store those transactions in QLDB, making it easy to guarantee the integrity of the security access records for audit purposes.

Number 3: RDS on VMware

Having worked with many companies that are concerned about moving into the cloud, I think RDS on VMware could be a great first step on their journey. Rather than taking the full plunge into the cloud, companies will be able to utilize RDS instances in their existing VMware environments. Since databases are such a critical piece of infrastructure, much of the initial testing can be done on-premises. You can set up RDS on VMware in your dev environment alongside your current dev databases and begin testing without ever needing to touch things in AWS. Then, once you're ready to move the rest of your infrastructure to the cloud, you'll have one less critical change to make.

Number 2: AWS Outposts

EC2 instances – and not just EC2 instances, but pretty much anything that uses EC2 under the hood (RDS, EMR, SageMaker, etc.) – will be able to run out of your own datacenter. The details are a little scant, but it sounds as though AWS is going to send you rack-mount servers with some amount of storage built in. You'll rack them, power them, plug them into your network, and be good to go. From a network perspective, it sounds like these instances will be able to show up as a VPC but also connect directly into your private network. For users that aren't ready to migrate to the cloud for whatever reason, Outposts could be the perfect way to start extending into AWS.

Number 1: AWS Transit Gateway

AWS Transit Gateway is a game changer for companies with many VPCs, VPNs, and, eventually, Direct Connect connections. At 2nd Watch we help companies design their cloud infrastructure as simply and elegantly as possible. When it comes to interconnecting VPCs, the old ways were always painful and manually intensive. With Transit Gateway you'll have one place to go to manage all of your VPC interconnectivity. The Transit Gateway acts as a hub and ensures that your data can be routed safely and securely. This will make managing all of your AWS interconnectivity much easier!

-Philip Seibel, Managing Cloud Consultant


AWS re:Invent 2018: Product Reviews & Takeaways

Interesting Takeaways

AWS re:Invent always has new product launches. The “new toys” are usually the ones that catch the most coverage, but there are a few things we feel are quite interesting coming out of re:Invent 2018 and decided they’d fit in their own section. Some are new products or additions to old products and some are based on the conversations or sessions heard around the event. Read on for our take on things!

AWS Marketplace for Containers

Announced at the Global Partner Summit keynote, the AWS Marketplace for Containers is the next logical step in the Marketplace ecosystem. Vendors will now be able to offer container solutions for their products, just as they do with Amazon EC2 AMIs. The big takeaway here is just how important containerization is and how much growth we see in the adoption of containerized products and serverless architectures in general. Along with the big announcements around AWS Lambda, this solidifies the industry push to adopt serverless models for applications.

AWS Marketplace – Private Marketplace

The AWS Marketplace has added the Private Marketplace to its feature set. You can now have your own marketplace that's shared across your AWS Organizations. This is neat and all, but I think what's even more interesting is what it hints at in the background. It seems to me that in order to have a well-established marketplace at all, your organization is going to need to be journeying down that DevOps trail: smaller teams who own and deploy focused applications (in this case, internally). I think it shows that a good deployment pipeline is really the best way to handle a project, regardless of whether it's for external or internal customers.

Firecracker

This looks really cool. Firecracker is a virtualization tool built specifically for microVMs and function-based services (like Lambda or Fargate). It runs on bare metal… wait, what? I thought we were trying to move AWAY from our own hosted servers?! That's true, and I'll be honest, I don't think many of our customers will be utilizing it. However, consider all the new IoT products and features announced at the conference and you'll see there's still a lot of bare metal, both in use AND in development! I don't think Firecracker is meant solely for large server-farm setups, but quite possibly for items in the IoT space. The serverless/microservice architecture is a strong one, and Firecracker allows that to happen in the IoT space. I'm currently working on installing it onto my kids' Minecraft micro computer. Do I smell another blog post?

Andy Jassy Says What?

In the fireside chat with Andy Jassy in the partner keynote, there were several things I found interesting, albeit not surprising (moving away from Oracle DB), but there was one that stood out above the rest:

I hear enterprises, all the time, wanting help thinking about how they can innovate at a faster clip. And, you know, it’s funny, a lot of the enterprise EBC’s I get to be involved in… I’d say roughly half the content of those are enterprises asking me about our offering and how we think about our business and what we have planned in the future, but a good chunk of every one of those conversations are enterprises trying to learn how we move quickly and how we invent quickly, and I think that enterprises realize that in this day and age if you are not reinventing fast and iterating quickly on behalf of your customers, it’s really difficult to be competitive. And so I think they want help from you in how to invent faster. Now, part of that is being able to operate on top of the cloud and operate on top of a platform like AWS that has so many services that you can stitch together however you see fit. Some of it also is, how do people think about DevOps? How do people think about organizing their teams? You know… what are the right constraints that you have but that still allow people to move quickly.

He said DevOps! So larger companies that are looking to change don't just want fancy tools and technology; they also need help getting better at effecting change. That's absolutely outside the wheelhouse of AWS, but I think it's very interesting that he specifically called it out, and called it out during the partner keynote. If you're interested in learning more about any of these announcements, contact us.

-Lars Cromley, Director of Engineering


AWS re:Invent Breakout Session – Proven Methodologies for Accelerating Cloud Journey

With a week full of sessions, bootcamps and extra-curriculars at AWS re:Invent 2018, you might not have had time to make it to our breakout session. Watch “Proven Methodologies for Accelerating Your Cloud Journey” on-demand now to see what you missed.

Learn how to accelerate your journey to the cloud while implementing a cloud-first strategy without sacrificing the controls and standards required in a large, publicly-traded enterprise.  Benefit from insights developed from working with some of the most recognized brands in the world. Discover how these household names leverage automation, CI / CD, and a modular approach to workload design to ensure consistent application of their security and governance requirements. Learn which approaches to use when transforming workloads to cloud native technologies, including serverless and containers.  With this approach, business users can finally receive properly governed resources without delaying or disrupting their need for agility, flexibility and cloud scale.
