Back to the Basics: The 3 Cloud Computing Service Delivery Models

In recent years, the adoption of cloud computing services has increased tremendously, especially with the onset of the pandemic. According to a report from the International Data Corporation (IDC), the public cloud services market grew 24.1% year over year in 2020. This surge in popularity is attributed to the benefits the cloud provides, including flexibility, on-demand capacity planning, cost reductions, and the ability for users to access shared resources from anywhere.

No matter where you are in your cloud journey, understanding foundational concepts like the different types of cloud service models is important to your success in the cloud. These cloud computing service models provide different levels of control, flexibility, and management capabilities. With a greater understanding of the models, their benefits, and the different ways to deploy these infrastructures, you can determine the method that matches your business needs best.

What are the 3 Cloud Computing Service Delivery Models?

Different cloud computing service delivery models help meet different needs, and determining which model is best for you is an important first step when you transition to the cloud. The three major models are IaaS, PaaS, and SaaS.

Infrastructure as a Service (IaaS)

IaaS is one of the most flexible cloud computing models. The infrastructure and its features are delivered in a completely remote environment, giving clients direct access to servers, networking, storage, and availability zones. Additionally, IaaS environments support automated deployments, significantly speeding up your operations compared to manual deployments. Some examples of IaaS vendors include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. In these environments, the vendor is responsible for the infrastructure, but users retain complete control over identity and access management (IAM), data, applications, runtime, middleware, the operating system, and the virtual network.

Platform as a Service (PaaS)

Another cloud computing service delivery model is Platform as a Service (PaaS). PaaS builds on IaaS, but customers are responsible only for identity and access management, data, and applications; the provider manages the underlying infrastructure. Rather than taking on responsibility for hardware and operating systems as with IaaS, PaaS lets you focus on the deployment and management of your applications. There is less need for resource procurement, capacity planning, software maintenance, and patching. Some examples of PaaS include Windows Azure, Google App Engine, and AWS Elastic Beanstalk.

Software as a Service (SaaS)

Perhaps the most well-known of the three models is SaaS, where deployment is handed off entirely to third-party services. The customer's responsibilities are limited to identity and access management and their data. SaaS bundles everything covered by IaaS and PaaS: infrastructure, middleware, and applications are delivered over the web and can be accessed seamlessly from any place, at any time, on any platform. Vendors of SaaS include CRM services like Salesforce and productivity software services like Google Apps. One major benefit of SaaS is that it reduces the cost of software ownership and eliminates the need for IT staff to manage the software, so your company can focus on what it does best. Another benefit of SaaS is its relevance to businesses today, as SaaS is considered the best option for remote collaboration. With SaaS, your applications can be accessed from any geographical location, and your company is not responsible for managing the hardware.
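The division of responsibility across the three models can be summarized in a small sketch. This is a simplified mapping based on the descriptions above, not an official vendor shared-responsibility matrix:

```python
# Customer-managed layers under each service model, as described above.
# Simplified illustration only; real vendor responsibility matrices
# are more detailed.
CUSTOMER_MANAGED = {
    "IaaS": ["IAM", "data", "applications", "runtime",
             "middleware", "operating system", "virtual network"],
    "PaaS": ["IAM", "data", "applications"],
    "SaaS": ["IAM", "data"],
}

def customer_manages(model: str, layer: str) -> bool:
    """Return True if the customer is responsible for `layer` under `model`."""
    return layer in CUSTOMER_MANAGED[model]

# Moving from IaaS toward SaaS shifts layers to the provider:
for model in ("IaaS", "PaaS", "SaaS"):
    print(model, "->", customer_manages(model, "operating system"))
```

Read down the dictionary: each step from IaaS to SaaS hands another set of layers to the provider, which is exactly the trade-off between control and convenience discussed in this section.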

Choosing the Cloud Computing Model that is Right for You

Each cloud computing service model has different benefits to consider when determining the model that will work best for your business needs, projects, and goals.

While IaaS gives you complete control over your infrastructure, some businesses may decide they do not need to fully manage their applications and infrastructure on their own. IaaS is considered a good fit for SMEs and startups that do not have the resources or time to buy and build the infrastructure for their own network. Larger companies that want complete control and scalability over their infrastructure may also opt for IaaS as a pay-as-you-go, remote option with powerful tools. One downside to IaaS is that it is more costly than the PaaS and SaaS models, though it does reduce costs by eliminating the need to deploy on-premises hardware.

IaaS Benefits

  • Reduced vendor lock-in
  • Platform virtualizations
  • On-demand scaling
  • GUI and API-based access
  • Increased security
  • Multi-tenant architecture

IaaS Disadvantages

  • Potential for vendor outages
  • The cost of training staff to manage new infrastructure

PaaS is a good choice if you are looking to decrease your application's time-to-market, thanks to its remote flexibility and accessibility. If your project involves multiple developers and vendors, each can quickly access computing and networking resources through a PaaS. PaaS might also be used by a team of developers to test software and applications.

PaaS Benefits

  • Rapid product development through simplified process
  • Custom solutions
  • Highly scalable
  • Eliminates need to manage basic infrastructure
  • Future-proof
  • Multi-tenant architecture

PaaS Disadvantages

  • Security issues
  • Increased dependency on vendor for speed and support

SaaS is a feasible option for smaller companies that need to launch their ecommerce presence quickly, or for short-term projects that require quick, easy, and affordable collaboration from either a web or mobile standpoint. Any company that requires frequent collaboration, such as transferring content and scheduling meetings, will find SaaS convenient and accessible.

SaaS Benefits

  • On-demand service
  • Automated provisioning/management of your cloud infrastructure
  • Subscription-based billing
  • Allows for full remote collaboration
  • Reduced software costs
  • Pay-as-you-go

SaaS Disadvantages

  • Less control
  • Limited solutions

The 3 Cloud Computing Deployment Models

Another foundational concept of the cloud is the deployment model. A deployment model determines where your infrastructure resides and who has control over its management. As with the cloud computing service delivery models, it is important to choose the deployment model that will best meet the needs of your business.

There are three types of cloud computing deployment models:

Public Cloud

A public cloud deployment means your applications run fully in the cloud and are accessible to the public. Organizations often choose a public cloud deployment for scalability reasons, or when security is not a main concern (for example, when testing an application). Businesses may choose to create or migrate applications to the cloud to take advantage of its benefits, such as easy set-up and low costs. Additionally, a public cloud deployment allows a cloud service provider to manage your cloud infrastructure for you.

On-Premises/Private

An on-premises cloud deployment, or private cloud deployment, is for companies that need to protect and secure their data and are willing to pay more to do so. Since it is on-premises, the data and infrastructure are accessed and managed by your own IT team. Due to in-house maintenance and fixed scalability, this deployment model is the costliest.

Hybrid

A hybrid cloud deployment connects cloud-based resources with existing resources that do not reside in the cloud, most commonly linking a public cloud with on-premises infrastructure. Through a hybrid cloud integration, you can segment data according to the needs of your business: for example, keeping highly sensitive data on-premises while placing less-sensitive data in the public cloud for accessibility and cost-effectiveness. This allows you to enjoy the benefits of the cloud while maintaining a secure environment for your data.

Next Steps

Determining the cloud computing service delivery model and deployment model best for your organization are both critical steps to the success of your company’s cloud computing journey. Get it right the first time by consulting with 2nd Watch. With a decade of experience as a managed service provider, we provide cloud services for your public cloud workloads. As an AWS Consulting Partner, Gold Microsoft Partner, and Google Cloud Partner, our team has the knowledge and expertise to efficiently guide you through your cloud journey. Contact us to learn more or talk to one of our experts.

-Tessa Foley, Marketing


Standardizing & Automating Infrastructure Development Processes

Introduction

Let’s start with a small look at the current landscape of technology and how we arrived here. There aren’t very many areas of tech that have not been, or are not currently, in a state of flux. Everything from software delivery vehicles and development practices to infrastructure creation has experienced some degree of transformation over the past several years. From VMs to containers, it seems like almost every day the technology tool belt grows a little bigger, and our world gets a little better (though perhaps more complex) thanks to these advancements. For me, this was incredibly apparent when I began to delve into configuration management, which later evolved into what we now call “infrastructure as code”.

The transformation of the development process began with the simple systems we once used to manage a few machines (like bash scripts or Makefiles), which then morphed into more complex systems (CFEngine, Puppet, and Chef) to manage thousands of systems. As configuration management software matured, engineers and developers began leaning on these tools to do more. With the advent of hypervisors and the rise of virtual machines, it was only a short time before hardware requests became API requests, giving birth to infrastructure as a service (IaaS). With all the new capabilities and options in this brave new world, we once again started to lean on our configuration management systems, this time for provisioning and not just convergence.

Provisioning & Convergence

I mentioned two terms that I want to clarify: provisioning and convergence. Say you are a car manufacturer and you want to make a car. Provisioning is the step in which you request the raw materials to make the parts for your automobile; this is where we use tools like Terraform, CloudFormation, or Heat. Convergence is the assembly line by which we check each part and assemble the final product (utilizing config management software).

By and large, the former tends to be declarative with little in the way of conditionals or logic, while the latter is designed to be robust and malleable software that supports all the systems we run and plan on running. This is the frame for the remainder of what we are going to talk about.

By separating the concerns of our systems, we can create a clear delineation of purpose for each tool, so we don't feel like we are trying to jam everything into an interface that doesn't best support our platform or, more importantly, our users. The remainder of this post is directed towards the provisioning aspect of configuration management.

Standards and Standardization

These are two different things in my mind. Standardization is extremely prescriptive and can often seem particularly oppressive to professional knowledge workers, such as engineers or developers; it can be seen as taking the innovation out of the job. Standards, on the other hand, provide boundaries, frame the problem, and allow for innovative ways of approaching solutions. I am not saying standardization in some areas is entirely bad, but we should let the people who do the work have the opportunity to grow and innovate in their own way, with guidance. The topic of standards and standardization is part of a larger conversation about culture and change. We intend to follow up with a series of blog articles on organizational change in the era of the public cloud in the coming weeks.

So, let’s say that we make a standard for our new EC2 instances running Ubuntu. We’ll say that all instances must run the latest official Canonical Ubuntu 14.04 AMI and must have these three tags: Owner, Environment, and Application. How can we enforce that during the development of our infrastructure? On AWS, we can create AWS Config Rules, but that is reactive and requires ad-hoc remediation. What we really want is a more prescriptive approach that brings our standards closer to the development pipeline. One of the ways I like to solve this issue is by creating an abstraction. Say we have a Terraform template that looks like this:

# Create a new instance of the latest Ubuntu 14.04 on an
# AWS t2.micro node
provider "aws" {
  region = "us-west-2"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  tags {
    Owner       = "DevOps Ninja"
    Environment = "Dev"
    Application = "Web01"
  }
}

This would meet the standard that we have set forth, but we are relying on the developer or engineer to adhere to that standard. What if we enforce this standard by codifying it in an abstraction? Let’s take that existing template and turn it into a terraform module instead.

Module

# Create a new instance of the latest Ubuntu 14.04 on an
# AWS node of the given instance type

variable "aws_region" {}
variable "ec2_owner" {}
variable "ec2_env" {}
variable "ec2_app" {}
variable "ec2_instance_type" {}

provider "aws" {
  region = "${var.aws_region}"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "${var.ec2_instance_type}"

  tags {
    Owner       = "${var.ec2_owner}"
    Environment = "${var.ec2_env}"
    Application = "${var.ec2_app}"
  }
}

Now we can have our developers and engineers leverage our tf_ubuntu_ec2_instance module.

New Terraform Plan

module "Web01" {
  source = "git::ssh://git@github.com/SomeOrg/tf_ubuntu_ec2_instance"

  aws_region        = "us-west-2"
  ec2_owner         = "DevOps Ninja"
  ec2_env           = "Dev"
  ec2_app           = "Web01"
  ec2_instance_type = "t2.micro"
}

This doesn’t enforce the usage of the module, but it does create an abstraction that provides an easy way to maintain standards without a ton of overhead. It also provides an example for the further creation of modules that enforce these particular standards.
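To push the standard even closer to the pipeline, the same tagging rule can also be checked automatically before deployment. Below is a minimal, hypothetical sketch of such a check; in practice it might run in CI against the parsed JSON output of a Terraform plan, but here it inspects plain dictionaries for illustration:

```python
# Hypothetical CI check for the tagging standard described in this post.
# The resource dicts are illustrative stand-ins for parsed plan output,
# not a real Terraform API.
REQUIRED_TAGS = {"Owner", "Environment", "Application"}

def missing_tags(resource: dict) -> set:
    """Return the required tags absent from a resource's tags block."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

web = {"tags": {"Owner": "DevOps Ninja",
                "Environment": "Dev",
                "Application": "Web01"}}
untagged = {"tags": {"Owner": "DevOps Ninja"}}

print(missing_tags(web))       # empty set: meets the standard
print(missing_tags(untagged))  # non-empty: fails the standard
```

A check like this turns the standard from documentation into a gate, without being as reactive as an AWS Config Rule.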

This leads us to another method of implementing standards, one that is more prescriptive and falls into the category of standardization (eek!). One of the most underutilized services in the AWS product stable has to be Service Catalog.

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.

The Interface

Once we have a few of these projects in place (e.g., a service catalog or a repo full of composable infrastructure modules that meet our standards), how do we serve them out? How you spur adoption of these tools, and how they are consumed, can differ greatly depending on your organizational structure. We don't want to upset the workflow and how work gets done; we just want it to go faster and be more reliable. This is what we mean when we talk about the interface. Whichever way work flows in, we should supplement it with some type of software or automation to link those pieces of work together. Here are a few examples of how this might look (depending on your organization):

1.) Central IT Managed Provisioning

If you have an organization that manages requests for infrastructure, this paradigm shift might seem daunting. The interface in this case is the ticketing system. Here we would create an integration with our ticketing software to automatically pull the correct project from the service catalog or module repo based on criteria in the ticket. The interface doesn't change but is instead supplemented by automation to answer these requests, saving time and providing faster delivery of service.
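As a sketch of what that ticket-driven automation could look like, the snippet below maps fields from an approved ticket onto the tf_ubuntu_ec2_instance module shown earlier. The ticket field names are hypothetical, not a real ticketing-system API:

```python
# Hypothetical sketch: render a Terraform module call from an approved
# infrastructure ticket. Ticket field names are illustrative only.
TICKET = {
    "owner": "DevOps Ninja",
    "environment": "Dev",
    "application": "Web01",
    "region": "us-west-2",
    "instance_type": "t2.micro",
}

MODULE_TEMPLATE = '''module "{app}" {{
  source = "git::ssh://git@github.com/SomeOrg/tf_ubuntu_ec2_instance"

  aws_region        = "{region}"
  ec2_owner         = "{owner}"
  ec2_env           = "{env}"
  ec2_app           = "{app}"
  ec2_instance_type = "{instance_type}"
}}'''

def render_module(ticket: dict) -> str:
    """Fill the module template with the ticket's fields."""
    return MODULE_TEMPLATE.format(
        app=ticket["application"],
        region=ticket["region"],
        owner=ticket["owner"],
        env=ticket["environment"],
        instance_type=ticket["instance_type"],
    )

print(render_module(TICKET))
```

The rendered block could then be committed to the infrastructure repo (or attached to the ticket for review), closing the loop between the request and the provisioned resources.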

2.) Full Stack Engineers

If you have engineers who develop both the software and the infrastructure that runs their applications, this is the easiest scenario to address in some regards and the hardest in others. Your interface might be a build server, or it could simply be the adoption of an internal open source model where each team develops modules and shares them in a common place, constantly trying to save time and avoid re-inventing the wheel.

Supplementing with software or automation can be done in a ton of ways. Check out an example Kelsey Hightower wrote using Jira.

“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” – John Gall

All good automation starts with a manual, well-defined process. Standardizing and automating infrastructure development processes begins with understanding how our internal teams can best be organized to perform work efficiently; only then should we begin automating. Work with your teammates to create a value stream map so you understand the process entirely before making any effort to automate a workflow.

With 2nd Watch designs and automation, you can deploy quicker, learn faster, and modify as needed with Continuous Integration/Continuous Deployment (CI/CD). Our Workload Solutions transform on-premises workloads into digital solutions in the public cloud with next-generation products and services. To accelerate your infrastructure development so that you can deploy faster, learn more often, and adapt to customer requirements more effectively, speak with a 2nd Watch cloud deployment expert today.

– Lars Cromley, Director of Engineering, Automation, 2nd Watch


Gartner Critical Capabilities for Public Cloud IaaS

Learn from Gartner what the Critical Capabilities for Public Cloud Infrastructure as a Service were for 2014. Gartner evaluates 15 public cloud IaaS service providers, listed in the 2014 Magic Quadrant, against eight critical capabilities across four common use cases your enterprise manages today.

Gartner takes an in-depth look at the critical capabilities for:

  • Application Development – for the needs of large teams of developers building new applications
  • Batch Computing – including high-performance computing (HPC), data analytics and other one-time (but potentially recurring), short-term, large-scale, scale-out workloads
  • Cloud Native Applications – for applications at any scale, which have been written with the strengths and weaknesses of public cloud IaaS in mind
  • General Business Applications – for applications not designed with the cloud in mind, but that can run comfortably in virtualized environments

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.