2nd Watch Enhances Managed Optimization service in partnership with Spot by NetApp

Today, we’re excited to announce a new enhancement to our Managed Optimization service – Spot Instance and Container Optimization – for enterprise IT departments looking to more thoughtfully allocate cloud resources and carefully manage cloud spend.

Enterprises using cloud infrastructure and services today are seeing higher cloud costs than anticipated due to factors such as cloud sprawl, shadow IT, improper allocation of cloud resources, and a failure to use the most efficient resource for the workload. To address these concerns, we take a holistic approach to optimization and have partnered with Spot by NetApp to enhance our Managed Optimization service.

The service works by recommending workloads that can take advantage of the cost savings associated with running instances, VMs, and containers on “spot” resources. A spot resource is unused cloud capacity that is sold in a marketplace for less than the on-demand price. Because spot resources let users run their workloads on unused EC2 instances or VMs at steep discounts, they can significantly lower their cloud compute costs, by up to 90% by some measures. To deliver this service, we’re partnering with Spot, whose cloud automation and optimization solutions help companies maximize the return on their cloud investments.
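On AWS, for example, requesting a spot instance can be as simple as adding a market option to a standard run-instances call. This is only a sketch; the AMI ID, instance type, and maximum price below are placeholders:

$ aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --instance-market-options 'MarketType=spot,SpotOptions={MaxPrice=0.04,SpotInstanceType=one-time}'

If the spot price stays below your maximum, the instance runs at the discounted rate; if the provider reclaims the capacity, the workload is interrupted, which is why the automation described below matters.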

“Early on, spot resources were difficult to manage, but the tasks associated with managing them can now be automated, making the use of spot a smart approach to curbing cloud costs,” says Chris Garvey, EVP of Product at 2nd Watch. “Typically, non-mission-critical workloads such as development and staging have been able to take advantage of the cost savings of spot instances.

“By combining 2nd Watch’s expert professional services, managed cloud experience and solutions from Spot by NetApp, 2nd Watch has been able to help companies use spot resources to run production environments.”

“Spot by NetApp is thrilled to be working with partners like 2nd Watch to help customers maximize the value of their cloud investment,” says Amiram Shachar, Vice President and General Manager of Spot by NetApp. “Working together, we’re helping organizations go beyond one-off optimization projects to instead ensure continuous optimization of their cloud environment using Spot’s unique technology. With this new offering, 2nd Watch demonstrates a keen understanding of this critical customer need and is leveraging the best technology in the market to address it.”

Using Docker Containers to Move Your Internal IT Orgs Forward

Many people are looking to take advantage of containers to isolate their workloads on a single system. Unlike traditional hypervisor-based virtualization, where each virtual machine runs its own full operating system, containers share the host’s operating system kernel while letting you segment off multiple applications, each with its own set of processes, libraries, and packages, on the same instance.
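As a quick illustration, you can run two containers with entirely different userlands side by side on one host (the image tags are examples, and the output will vary with image versions):

$ docker run -d --name app-a ubuntu:14.04 sleep infinity
$ docker run -d --name app-b centos:7 sleep infinity
$ docker exec app-a cat /etc/issue
Ubuntu 14.04.5 LTS \n \l
$ docker exec app-b cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)

Each container gets its own processes and package manager, but both share the host’s kernel.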

Let’s walk through some grievances that many of us have faced at one time or another in our IT organizations:

Say, for example, your development team is setting up a web application. They want to set up a traditional three-tier system with app, database, and web servers. They notice there is a lot of support in the open source community for their app when it is run on Ubuntu Trusty (Ubuntu 14.04 LTS) or later. They’ve developed the app in their local sandbox with an Ubuntu image they downloaded; however, their company is a Red Hat shop.

Now, depending on the type of environment you’re in, chances are you’ll have to wait for the admins to provision an environment for you. This often entails (but is not limited to) spinning up an instance, reviewing the most stable version of the OS, creating a new hardened AMI, adding it to Packer, figuring out which configs to manage, and refactoring provisioning scripts to use aptitude and Ubuntu’s directory structure (e.g., Debian has over 50K packages to choose from and manage). On top of that, the most stable version of Ubuntu is missing some newer packages that you’ve tested in your sandbox, which need to be pulled from source or another repository. At this point, the developers are writing configuration runbooks to support the app while the admin gets up to speed with the OS (not difficult, but time-consuming nonetheless).

You can see my point here. A significant amount of overhead has been introduced, and it’s stagnating development. And think about the poor sysadmins. They have other environments to secure, networking spaces to manage, operations to improve, and existing production stacks to monitor and support, all while getting bogged down supporting this app that is still in the very early stages of development. This means mission-critical apps can lose visibility and application modernization stagnates. Nobody wins in this scenario.

Now let us revisit the same scenario with containers:

I was able to run my Jenkins build server and an NGINX web proxy on a hardened CentOS 7 AMI, provided by the systems engineers with Docker installed. From there, I executed a docker pull command pointed at our local repository and deployed two Docker images with Debian as the underlying OS:

$ docker pull my.docker-repo.com:4443/jenkins
$ docker pull my.docker-repo.com:4443/nginx

$ docker ps

CONTAINER ID   IMAGE                                          COMMAND                  CREATED          STATUS          PORTS                                      NAMES
7478020aef37   my.docker-repo.com:4443/jenkins/jenkins:lts   "/sbin/tini -- /us…"     16 minutes ago   Up 16 minutes   8080/tcp, 0.0.0.0:80->80/tcp, 50000/tcp    jenkins
d68e3b96071e   my.docker-repo.com:4443/nginx/nginx:lts       "nginx -g 'daemon of…"   16 minutes ago   Up 16 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx

$ sudo systemctl status jenkins-docker

jenkins-docker.service - Jenkins
Loaded: loaded (/etc/systemd/system/jenkins-docker.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2018-11-08 17:38:06 UTC; 18min ago
Process: 2006 ExecStop=/usr/local/bin/jenkins-docker stop (code=exited, status=0/SUCCESS)

These docker and systemctl commands were all executed on the host instance itself. Note how I’m able to cat the OS release file from within the container:

$ sudo docker exec d68e3b96071e cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

I was able to do so because Docker containers do not have their own kernel; rather, they share the kernel of the underlying host via Linux system calls (e.g., setuid, stat, umount) like any other application. These system calls (syscalls for short) are standard across distributions, and Docker requires a Linux kernel of version 3.10 or higher. In the event older syscalls are deprecated and replaced with new ones, you can update the kernel of the underlying host, which can be done independently of an OS upgrade. Inside the container, the binaries and apt/aptitude package-management tools are the same as if you had installed Debian or Ubuntu directly on an EC2 instance (or VM).
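To see the shared kernel in action, compare the kernel release reported on the host with the one reported inside the Debian container (the kernel version shown here is illustrative of a CentOS 7 host):

$ uname -r
3.10.0-862.el7.x86_64
$ sudo docker exec d68e3b96071e uname -r
3.10.0-862.el7.x86_64

Both report the same kernel, even though /etc/os-release inside the container reports Debian.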

Q: But I’m running a Windows environment. Those OSs don’t have a Linux kernel.

Yes, developers may want to remove the cost overhead associated with Windows licenses by exploring running their apps on a Linux OS. Others may simply want to modernize their .NET applications by testing out the latest versions in containers. Docker allows you to run Linux containers on Windows 10 and Windows Server 2016. Because Docker was initially written to run on Linux distributions, taking advantage of multitenant hosting on Windows requires Hyper-V containers, which provision a thin VM on top of your host. You can then manage your mixed environment of Windows and Linux hosts via the --isolation option. More information can be found in the Microsoft and Docker documentation.
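As a rough sketch, switching between the two isolation modes on a Windows host looks like this (the image name and tag are illustrative; check the Microsoft documentation for images that match your host version):

> docker run --isolation=hyperv mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver
> docker run --isolation=process mcr.microsoft.com/windows/nanoserver:1809 cmd /c ver

Hyper-V isolation boots each container in a thin VM with its own kernel, while process isolation shares the host kernel, so the latter requires the container’s Windows version to match the host’s.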

Conclusion:

IT teams need to be able to help drive the business forward. New technologies and security patches arrive on a daily basis. Developers need to be able to work freely on modernizing their code and applications. Concurrently, operations needs to be able to support and enhance the pipelines and platforms that get that code out quickly and securely. Leveraging Docker containers in conjunction with these pipelines helps ensure both happen in parallel without unnecessary overhead. This allows teams to work independently in the early stages of the development cycle, and then more collaboratively to get releases out the door.

For help getting started leveraging your environment to take advantage of containerization, contact us.

-Sabine Blair, Systems Engineer & Cloud Consultant

AWS re:Invent 2018: Product Reviews & Takeaways

Interesting Takeaways

AWS re:Invent always has new product launches. The “new toys” usually catch the most coverage, but a few things coming out of re:Invent 2018 struck us as quite interesting, so we decided they deserved their own section. Some are new products or additions to existing products, and some are based on the conversations and sessions heard around the event. Read on for our take on things!

AWS Marketplace for Containers

Announced at the Global Partner Summit keynote, the AWS Marketplace for Containers is the next logical step in the Marketplace ecosystem. Vendors will now be able to offer container solutions for their products, just as they do with AWS EC2 AMIs. The big takeaway here is just how important containerization has become and how much growth we’re seeing in the adoption of containerized products and serverless architectures in general. Along with the big announcements around AWS Lambda, this just solidifies the industry’s push to adopt serverless models for applications.

AWS Marketplace – Private Marketplace

The AWS Marketplace has added the Private Marketplace to its feature set. You can now have your own marketplace that’s shared across the accounts in your AWS Organization. This is neat and all, but I think what’s even more interesting is what it hints at in the background. It seems to me that in order to have a well-established marketplace at all, your organization is going to need to be journeying down that DevOps trail: smaller teams who own and deploy focused applications (in this case, internally). I think it shows that a good deployment pipeline is really the best way to handle a project, regardless of whether it’s for external or internal customers.

Firecracker

This looks really cool. Firecracker is a virtualization technology built specifically for microVMs and function-based services (like Lambda or Fargate). It runs on bare metal… wait, what? I thought we were trying to move AWAY from our own hosted servers?! That’s true, and I’ll be honest, I don’t think many of our customers will be utilizing it. However, consider all the new IoT products and features announced at the conference and you’ll see there’s still a lot of bare metal, both in use AND in development! I don’t think Firecracker is meant solely for large server-farm setups, but quite possibly for items in the IoT space. The serverless/microservice architecture is a strong one, and Firecracker allows that to happen in the IoT space. I’m currently working on installing it onto my kids’ Minecraft micro computer. Do I smell another blog post?
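If you want to poke at it yourself, Firecracker exposes a REST API over a Unix socket, and a minimal boot sequence looks roughly like the following (the kernel and rootfs paths are placeholders; see the Firecracker getting-started guide for the real steps):

$ ./firecracker --api-sock /tmp/firecracker.sock &
$ curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/boot-source' \
    -H 'Content-Type: application/json' \
    -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1"}'
$ curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/drives/rootfs' \
    -H 'Content-Type: application/json' \
    -d '{"drive_id": "rootfs", "path_on_host": "./rootfs.ext4", "is_root_device": true, "is_read_only": false}'
$ curl --unix-socket /tmp/firecracker.sock -X PUT 'http://localhost/actions' \
    -H 'Content-Type: application/json' \
    -d '{"action_type": "InstanceStart"}'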

Andy Jassy Says What?

In the fireside chat with Andy Jassy during the partner keynote, there were several things I found interesting, if not surprising (such as the move away from Oracle DB), but one stood out above the rest:

I hear enterprises, all the time, wanting help thinking about how they can innovate at a faster clip. And, you know, it’s funny, a lot of the enterprise EBC’s I get to be involved in… I’d say roughly half the content of those are enterprises asking me about our offering and how we think about our business and what we have planned in the future, but a good chunk of every one of those conversations are enterprises trying to learn how we move quickly and how we invent quickly, and I think that enterprises realize that in this day and age if you are not reinventing fast and iterating quickly on behalf of your customers, it’s really difficult to be competitive. And so I think they want help from you in how to invent faster. Now, part of that is being able to operate on top of the cloud and operate on top of a platform like AWS that has so many services that you can stitch together however you see fit. Some of it also is, how do people think about DevOps? How do people think about organizing their teams? You know… what are the right constraints that you have but that still allow people to move quickly.

He said DevOps! So larger companies that are looking to change don’t just want fancy tools and fancy technology, but they also need help getting better at effecting change. That’s absolutely outside the wheelhouse of AWS, but I think it’s very interesting that he specifically called that out, and called it out during the partner keynote. If you’re interested in learning more about any of these announcements, contact us.

-Lars Cromley, Director of Engineering