
2020 Predictions: Multicloud

Multicloud has risen to the fore in 2019 as customers continue to migrate to the cloud and build out a variety of cloud environments.

Multicloud offers the obvious benefits of avoiding lock-in with a single provider and of being able to try out different platforms. But how far have customers actually gotten when it comes to operating multicloud environments? And what does 2020 hold for the strategy?

Adoption

As 2020 approaches and datacenter leases expire, we can expect continued cloud adoption with the big public cloud players – Amazon and Azure in particular. Whether a move to a multicloud environment is in the cards, or a step too far for firms already nervous about shifting from a hosted datacenter to the public cloud, is a question cloud providers are eager to answer.

But there isn’t a simple answer, of course.

We have to remember that a multicloud solution requires a way to migrate or move workloads between clouds, and one of the hurdles multicloud adoption will face in 2020 is that organizations do not yet have the knowledge base across different cloud platforms.

What we may well see is firms taking that first step and turning to VMware or Kubernetes – an open source container orchestration platform – as a means to overlay native cloud services in order to adopt multicloud strategies. At VMworld in August, the vendor demonstrated VMs being migrated between Azure and AWS, something users can start to become familiar with in order to build their knowledge of cloud migrations and, therefore, multicloud environments.

For multicloud in 2020 this means not so much adoption, but awareness and investigation. Those organizations using an overlay like VMware to operate a multicloud environment can do so without having deep cloud expertise and sophistication in-house. This may be where multicloud takes off in 2020. Organizations wouldn’t necessarily need to know (or care) how to get between their clouds, they would have the ability to bounce between Azure, Amazon and Google Cloud via their VMware instead.

Still, as we’re moving into a multicloud world and companies start to gravitate towards a multicloud model, they’re going to see that there are multiple ways to utilize it. They will want to understand it and investigate it further, which will naturally lead to questions as to how it can serve their business. And at the moment, the biggest limiter is the lack of in-house knowledge to give organizations that direction. Most firms don’t yet have a single person who knows Amazon or Azure at a sophisticated enough level to comfortably answer questions about the individual platforms, let alone how they can operate together in a multicloud environment.

What this means is that customers do a lot of outsourcing when it comes to managing their cloud environment, particularly in areas like PaaS, IaaS, Salesforce and so on. As a result, organizations are starting to understand how they can use these cloud technologies for their internal company processes, and they’re asking, ‘Why can’t we use the rest of the cloud as well, not just for this?’ This will push firms to start investigating multicloud more in 2020 and beyond – because they will realize they’re already operating elements of a multicloud environment and their service providers can advise them on how to build on that.

Adoption steps

For firms thinking about adopting a multicloud environment – even those who may not feel ready yet – it’s a great idea to start exploring a minimum of two cloud providers. This will help organizations get a feel for the interface and services, which will lead to an understanding of how a multicloud environment can serve their business and which direction to go in.

It’s also a good idea to check out demos of the VMware or Kubernetes platforms to see where they might fit in.

And lastly, engage early with Amazon, Azure and VMware, or a premier partner like 2nd Watch. Companies seeking a move to the cloud are potentially missing out on funds these providers set aside for migration assistance and adoption.

What will 2020 bring?

2020 is certainly set to see multicloud questions being asked, but it’s likely that hybrid cloud will be more prevalent than multicloud. Why? Because customers are still trying to decide if they want to get into cloud rather than think about how they can utilize multiple clouds in their environment. They just aren’t there yet.

As customers still contemplate this move to the cloud, it’s much more likely that they will consider a partial move – the hybrid cloud – to begin with, as it gives them the comfort of knowing they still hold some of their data on-premises, while they get used to the idea of the public cloud. This is especially true of customers in highly regulated industries, such as finance and healthcare.

What does this mean for multicloud? A wait. The natural step forward from hybrid cloud is multicloud, but providers will need to accept that it’s going to take time and we’re simply not quite there yet, nor will we be in 2020.

But we will be on the way – well on the way – as customers take a step further along the logical path to a multicloud future. 2020 may not be the year of multicloud, but it will be the start of a pretty short journey there.

-Jason Major, Principal Cloud Consultant

-Michael Moore, Associate Cloud Consultant


The Cloudcast Podcast with Jeff Aden, Co-Founder and EVP at 2nd Watch

The Cloudcast’s Aaron and Brian talk with Jeff Aden, Co-Founder and EVP at 2nd Watch, about the evolution of 2nd Watch as a Cloud Integrator as AWS has grown and shifted its focus from startups to enterprise customers. Listen to the podcast at http://www.thecloudcast.net/2019/02/evolution-of-public-cloud-integrator.html.

Topic 1 – Welcome to the show Jeff. Tell us about your background, the founding of 2nd Watch, and how the company has evolved over the last few years.

Topic 2 – We got to know 2nd Watch at one of the first AWS re:Invent shows, as they had one of the largest booths on the floor. At the time, they were listed as one of AWS’s best partners. Today, 2nd Watch provides management tools, migration tools, and systems-integration capabilities. How does 2nd Watch think of themselves?

Topic 3 –  What are the concerns of your customers today, and how does 2nd Watch think about matching customer demands and the types of tools/services/capabilities that you provide today?

Topic 4 – We’d like to pick your brain about the usage and insights you’re seeing from your customers’ usage of AWS. It’s mentioned that 100% are using DynamoDB, 53% are using Elastic Kubernetes, and a fast-growing section is using things like Athena, Glue and Sagemaker. What are some of the types of applications that you’re seeing customers build that leverage these new models?

Topic 5 – With technologies like Outpost being announced, after so many years of AWS saying “Cloud or legacy Data Center,” how do you see this impacting the thought process of customers or potential customers?


Migrating to Terraform v0.10.x

When it comes to managing cloud-based resources, it’s hard to find a better tool than Hashicorp’s Terraform. Terraform is an ‘infrastructure as code’ application, marrying configuration files with backing APIs to provide a nearly seamless layer over your various cloud environments. It allows you to declaratively define your environments and their resources through a process that is structured, controlled, and collaborative.

One key advantage Terraform provides over other tools (like AWS CloudFormation) is having a rapid development and release cycle fueled by the open source community. This has some major benefits: features and bug fixes are readily available, new products from resource providers are quickly incorporated, and you’re able to submit your own changes to fulfill your own requirements.

Hashicorp recently released v0.10.0 of Terraform, introducing some fundamental changes in the application’s architecture and functionality. We’ll review the three most notable of these changes and how to incorporate them into your existing Terraform projects when migrating to Terraform v0.10.x.

  1. Terraform Providers are no longer distributed as part of the main Terraform distribution
  2. New auto-approve flag for terraform apply
  3. Existing terraform env commands replaced by terraform workspace

A brief note on Terraform versions:

Even though Terraform uses a style of semantic versioning, their ‘minor’ versions should be treated as ‘major’ versions.

1. Terraform Providers are no longer distributed as part of the main Terraform distribution

The biggest change in this version is the removal of provider code from the core Terraform application.

Terraform Providers are responsible for understanding API interactions and exposing resources for a particular platform (AWS, Azure, etc). They know how to initialize and call their applications or CLIs, handle authentication and errors, and convert HCL into the appropriate underlying API calls.

It was a logical move to split the providers out into their own distributions. The core Terraform application can now add features and release bug fixes at a faster pace, new providers can be added without affecting the existing core application, and new features can be incorporated and released to existing providers without as much effort. Having split providers also allows you to update your provider distribution and access new resources without necessarily needing to update Terraform itself. One downside of this change is that you have to keep up to date with features, issues, and releases of more projects.

The provider repos can be accessed via the Terraform Providers organization in GitHub. For example, the AWS provider can be found here.

Custom Providers

An extremely valuable side-effect of having separate Terraform Providers is the ability to create your own, custom providers. A custom provider allows you to specify new or modified attributes for existing resources in existing providers, add new or unsupported resources in existing providers, or generate your own resources for your own platform or application.

You can find more information on creating a custom provider from the Terraform Provider Plugin documentation.
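As a sketch of the mechanics (the provider name below is hypothetical): since v0.10, a custom provider is just a binary named terraform-provider-&lt;NAME&gt; that Terraform discovers in its third-party plugin directory during initialization.

```shell
# Build the plugin (assumes a Go project implementing the provider
# plugin interface) and install it where Terraform 0.10+ looks for
# third-party plugins, then reinitialize the project.
go build -o terraform-provider-example
mkdir -p ~/.terraform.d/plugins
mv terraform-provider-example ~/.terraform.d/plugins/
terraform init
```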

1.1 Configuration

The nicest part of this change is that it doesn’t really require any additional modifications to your existing Terraform code if you were already using a Provider block.

If you don’t already have a provider block defined, you can find their configurations from the Terraform Providers documentation.

You simply need to call the terraform init command before you can perform any other action. If you fail to do so, you’ll receive an error informing you of the required actions (img 1a).

After successfully reinitializing your project, you will be provided with the list of providers that were installed as well as the versions requested (img 1b).

You’ll notice that Terraform suggests versions for the providers we are using – this is because we did not specify any specific versions of our providers in code. Since providers are now independently released entities, we have to tell Terraform what code it should download and use to run our project.

(Image 1a: Notice of required reinitialization)

(Image 1b: Response from successful reinitialization)

Providers are released separately from Terraform itself, and maintain their own version numbers.

You can specify the version(s) you want to target in your existing provider blocks by adding the version property (code block 1). These versions should follow the semantic versioning specification (similar to node’s package.json or python’s requirements.txt).

For production use, it is recommended to limit the acceptable provider versions to ensure that new versions with breaking changes are not automatically installed.
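For example, a pessimistic constraint using the ~> operator (the pinned series below is illustrative) accepts new patch releases while refusing a release with breaking changes:

```hcl
provider "aws" {
  # "~> 0.1" allows any 0.1.x release, but never 0.2.0 or later
  version = "~> 0.1"
  region  = "us-west-2"
}
```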

(Code Block 1: Provider Config)

provider "aws" {
  version = "0.1.4"
  allowed_account_ids = ["1234567890"]
  region = "us-west-2"
}

 (Image 1c: Currently defined provider configuration)

2. New auto-approve flag for terraform apply

In previous versions, running terraform apply would immediately apply any changes between your project and saved state.

Your normal workflow would likely be to run terraform plan, followed by terraform apply, and hope nothing changed in between.
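In other words, the pre-0.10 sequence looked something like this (assuming terraform is on your PATH):

```shell
# Preview the proposed changes...
terraform plan

# ...then apply. Note that apply re-computes its own plan and runs it
# immediately, so anything that changed since the plan step is applied
# without further confirmation.
terraform apply
```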

This version introduced a new auto-approve flag that controls the behavior of terraform apply.

Deprecation Notice

This flag currently defaults to true to maintain backwards compatibility, but the default will change to false in a future release.

2.1 auto-approve=true (current default)

When set to true, terraform apply will work like it has in previous versions.

If you want to maintain this functionality, you should update your scripts, build systems, etc. now, as this default value will change in a future Terraform release.

(Code Block 2: Apply with default behavior)

# Apply changes immediately without plan file
terraform apply --auto-approve=true

2.2 auto-approve=false

When set to false, Terraform will present the user with the execution plan and pause for interactive confirmation (img 2a).

If the user provides any response other than yes, terraform will exit without applying any changes.

If the user confirms the execution plan with a yes response, Terraform will then apply the planned changes (and only those changes).

If you are trying to automate your Terraform scripts, you might want to consider producing a plan file for review, then providing explicit approval to apply the changes from the plan file.

(Code Block 3: Apply plan with explicit approval)

# Create Plan
terraform plan -out=tfplan

# Apply approved plan (applying a saved plan file does not prompt,
# so the auto-approve flag is not needed)
terraform apply tfplan

(Image 2a: Terraform apply with execution plan)


3. Existing terraform env commands replaced by terraform workspace

The terraform env family of commands was replaced with terraform workspace to help alleviate some confusion in functionality. Workspaces are very useful and can do much more than just split up environment state. I recommend checking them out and seeing if they can improve your projects.

There is not much to do here other than switch the command invocation; the previous commands still work for now but are deprecated.
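The commands map one-to-one; for example (the workspace name here is illustrative):

```shell
terraform workspace list            # was: terraform env list
terraform workspace new staging     # was: terraform env new staging
terraform workspace select staging  # was: terraform env select staging
terraform workspace delete staging  # was: terraform env delete staging
```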

 


 

— Steve Byerly, Principal SDE (IV), Cloud, 2nd Watch


The Most Popular AWS Products of 2016

We know from the past 5 years of Gartner Magic Quadrants that AWS is a leader among IaaS vendors, placed furthest for ‘completeness of vision’ and ‘ability to execute.’ AWS’ rapid pace of innovation contributes to its position as the leader in the space. The cloud provider releases hundreds of product and service updates every year. So, which of those are the most popular amongst our enterprise clients?

We analyzed data from our customers for the year, from a combined 100,000+ instances running monthly. The most popular AWS products and services, represented by the percentage of 2nd Watch customers utilizing them in 2016, include Amazon’s two core services for compute and storage – EC2 and S3 – and Amazon Data Transfer, each at 100% usage. Other high-ranking products include Simple Queue Service (SQS) for message queuing (84%) and Amazon Relational Database Service or RDS (72%). Usage for these services remains fairly consistent, and we would expect to see these services across most AWS deployments.

There are some relatively new AWS products and services that made the “most-popular” list for 2016 as well. AWS Lambda serverless computing (38%), Amazon WorkSpaces, a secure virtual desktop service (27%), and Kinesis, a real-time streaming data platform (12%), are quickly being adopted by AWS users and rising in popularity.

The fastest-growing services in 2016, based on CAGR, include AWS CloudTrail (48%), Kinesis (30%), Config for resource inventory, configuration history, and change notifications (24%), Elasticsearch Service for real-time search and analytics (22%), Elastic MapReduce, a tool for big data processing and analysis (20%), and Redshift, the data warehouse service alternative to systems from HP, Oracle and IBM (14%).

The accelerated use of these products demonstrates how quickly new cloud technologies are becoming the standard in today’s evolving market. Enterprises are moving away from legacy systems to cloud platforms for everything from back-end systems to business-critical, consumer-facing assets. We expect growth in each of these categories to continue as large organizations realize the benefits and ease of using these technologies.

Download the 30 Most Popular AWS Products infographic to find out which others are in high-demand.

-Jeff Aden, Co-Founder & EVP Business Development & Marketing
