Cloud Economics: Empowering Organizations for Success

Cloud economics is crucial for an organization to make the most out of their cloud solutions, and business leaders need to prioritize shifting their company culture to embrace accountability and trackability.

When leaders hear the phrase “cloud economics,” they tend to think about budgeting and controlling costs. Cost management is an element of cloud economics, but it is not the entire equation. To implement cloud economics in a beneficial way, organizations must realize that it is not a budgetary practice but an organizational culture shift.

The very definition of “economics” indicates that the study is more than just a numbers game. Economics is “a science concerned with the process or system by which goods and services are produced, sold, and bought.” The practice of economics involves a whole “process or system” where actors and actions are considered and accounted for. 

With this definition in mind, cloud economics requires companies to look at key players and behaviors when evaluating their cloud environment in order to maximize the business value of their cloud.

Once an organization has fully embraced the study of cloud economics, it will be able to gain insight into which departments are utilizing the cloud, which applications and workloads are utilizing it, and how all of these moving parts contribute to greater business goals. Embracing transparency and trackability enables teams to work together harmoniously to control their cloud infrastructure and prove the true business benefits of the cloud.

If business leaders want to apply cloud economics to their organizations, they must go beyond calculating cloud costs. They will need to promote a culture of cross-functional collaboration and honest accountability. Leadership should prioritize and facilitate the joint efforts of cloud architects, cloud operations, developers, and the sourcing team. 

Cloud economics encourages communication, collaboration, and a change in culture, which carry the added benefits of better cloud cost management and cloud business success.

Where do companies lose control of their cloud costs?

When companies lose control of cloud costs, the business value of the cloud disappears as well. If cloud spending is out of control and there is no business value to show for it, how are leaders supposed to feel good about their cloud infrastructure? Going over budget with nothing to show for it is not a sound business case for any enterprise in any industry.

It is easy for cloud spending to spiral out of control, and it usually boils down to poor business decisions from leadership. Company leaders should first recognize that they wield the power to manage cloud costs and foster communication between teams. If they make poor business decisions, like prioritizing speedy delivery over well-written code or not promoting transparency, they are allowing practices that negatively impact cloud costs.

When leaders push their teams to be fast rather than thorough, they create technical debt and tension between teams. The following suboptimal practices can take hold when leadership is not prioritizing cloud cost optimization:

  • Developers ignore seemingly small administrative tasks that are actually immensely consequential, like rightsizing infrastructure or turning off inactive applications.
  • Architects select suboptimal designs that are easier and faster to implement but more expensive to run.
  • Developers use inefficient code and crude algorithms to ship a feature faster, then fail to revisit the performance optimizations that would reduce resource consumption.
  • Developers forgo deployment automation that would help to automatically rightsize (see the sketch after this list).
  • Developers write code that isn’t cloud-native, and is therefore not cloud-optimized.
  • Finance and procurement teams look only at the bottom line and don’t fully understand why the cloud bill is so high, creating tension between IT/dev and finance/procurement.
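
As a concrete illustration of that deployment automation, here is a minimal Terraform sketch that scales a development Auto Scaling group down to zero outside business hours. The group name, region, and schedule are illustrative assumptions, not a prescription:

provider "aws" {
  region = "us-west-2"
}

# Scale the (hypothetical) dev ASG down to zero every weekday evening
resource "aws_autoscaling_schedule" "dev_scale_down" {
  scheduled_action_name  = "dev-nightly-scale-down"
  autoscaling_group_name = "dev-web-asg"      # assumed existing ASG
  recurrence             = "0 19 * * MON-FRI" # cron syntax, UTC
  min_size               = 0
  max_size               = 0
  desired_capacity       = 0
}

# Bring it back up before the workday starts
resource "aws_autoscaling_schedule" "dev_scale_up" {
  scheduled_action_name  = "dev-morning-scale-up"
  autoscaling_group_name = "dev-web-asg"
  recurrence             = "0 7 * * MON-FRI"
  min_size               = 1
  max_size               = 3
  desired_capacity       = 1
}

Even this small amount of codified discipline removes an entire class of “someone forgot to turn it off” waste.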

When these actions compound, they create an infrastructure mess that is incredibly difficult to clean up. Poorly implemented designs that are not easily scalable require a significant amount of development time to fix, leaving companies with inefficient cloud infrastructure and preposterously high cloud costs.

Furthermore, these high and unexplained cloud bills cause rifts between teams and are detrimental to collaboration efforts. Lack of accountability and visibility causes developer and finance teams to have misaligned business objectives. 

Poor cloud governance and culture are derived from leadership’s misguided business decisions and muddled planning. If leaders don’t prioritize cloud cost optimization through cloud economics, the business value of the cloud is diminished, and company collaboration will suffer. Developers and architects will continue to execute processes that create high cloud costs, and finance and procurement teams will forever be at odds with the IT team.

What are the benefits of cloud economics?

Cloud economics delivers a broad range of benefits, and it also gives leaders a way to address several common business pitfalls. The first nine items below are benefits; the final three are pitfalls that leaders can readily address by embracing the practice:

  1. Cost Savings: The cloud eliminates the need for upfront hardware investments and reduces ongoing maintenance and operational costs. Organizations only pay for the resources they use, allowing for cost optimization and scalability.
  2. Infrastructure Efficiency: Cloud providers can achieve economies of scale by consolidating resources and optimizing data center operations. This results in higher infrastructure efficiency, reducing costs for businesses compared to managing their own on-premises infrastructure.
  3. Agility and Speed: The cloud enables rapid deployment and provisioning of resources, reducing the time and cost associated with traditional IT infrastructure setup. This agility allows businesses to quickly adapt to changing market demands and launch new products or services faster.
  4. Global Reach and Accessibility: Cloud services provide a global infrastructure footprint, allowing businesses to easily expand their operations into new regions without the need for physical infrastructure investments. This global reach enables faster access to customers and markets.
  5. Scalability and Elasticity: Cloud services offer the ability to scale resources up or down based on demand. This scalability eliminates the need for overprovisioning and ensures businesses have the necessary resources to handle peak workloads without incurring additional costs during idle periods.
  6. Improved Resource Utilization: Cloud providers optimize resource utilization through virtualization and efficient resource management techniques. This leads to higher resource utilization rates, reducing wasted capacity and maximizing cost efficiency.
  7. Business Continuity and Disaster Recovery: Cloud services provide built-in redundancy and disaster recovery capabilities, reducing the need for costly backup infrastructure and complex recovery plans. This improves business continuity while minimizing the financial impact of potential disruptions.
  8. Innovation and Competitive Edge: The cloud enables rapid experimentation and innovation, allowing businesses to quickly test and launch new products or services. This agility gives organizations a competitive edge in the market, driving revenue growth and differentiation.
  9. Focus on Core Business: By offloading infrastructure management to cloud providers, businesses can focus more on their core competencies and strategic initiatives. This shift in focus improves productivity and resource allocation, leading to better economic outcomes.
  10. Decentralized Costs and Budgets: Knowing your budgets may seem obvious, but more often than not, leaders don’t even know what they are spending on the cloud. This is usually due to siloed department budgets and a lack of disclosure. Cloud economics requires leaders to create visibility into their cloud spend and open channels of communication about allocation, budgeting, and forecasting (see the budgeting sketch after this list).
  11. Lack of Planning and Unanticipated Usage: If organizations don’t plan, they will end up over-utilizing the cloud. Failing to forecast or proactively budget cloud resources leads to paying for unnecessary or unused resources. With cloud economics, leaders are responsible for the strategies, systems, and internal communications that connect cloud costs with business goals.
  12. Non-Committal Mindset: This issue is a culmination of other problems. If business leaders are unsure of what they are doing in the cloud, they are less willing to commit to long-term cloud contracts. Unwillingness to commit to contracts is a missed opportunity for business leaders because long-term engagements are more cost-friendly. Once leaders have implemented cloud economics to inspire confidence in their cloud infrastructure, they can assertively evaluate purchasing options in the most cost-effective way.
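
As one way to create the visibility described in item 10, a per-team monthly budget with an alert threshold can be codified in Terraform. This is a minimal sketch, not a prescription; the budget amount, team tag, and notification email are illustrative assumptions, the resource schema assumes a recent AWS provider, and it presumes the Team cost allocation tag has been activated in the billing console:

provider "aws" {
  region = "us-east-1"
}

resource "aws_budgets_budget" "data_eng_monthly" {
  name         = "data-eng-monthly" # hypothetical team budget
  budget_type  = "COST"
  limit_amount = "5000"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  # Scope the budget to resources tagged Team=data-eng
  cost_filter {
    name   = "TagKeyValue"
    values = ["user:Team$data-eng"]
  }

  # Email the team when actual spend passes 80% of the budget
  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["data-eng-leads@example.com"]
  }
}

A budget like this makes spend a shared, visible number rather than a surprise on the consolidated bill.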

What are the steps to creating a culture around cloud economics?

Cloud economics goes beyond calculating and cutting costs; it is a company culture built on cross-functional effort. Though it may seem like a significant undertaking, the steps to get started are quite manageable. Below is a high-level plan that business leaders must take charge of to create a culture that prioritizes cloud economics:

1. Inform: Stage one consists of collecting data and understanding the current cloud situation. Company leaders need to know the true costs of the cloud before they can proceed. Creating visibility into the current state is also the first step to creating a culture of communication and transparency amongst teams and stakeholders.
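
The tooling for this visibility step varies, but as one hedged example, a Cost and Usage Report delivered to S3 gives finance and engineering a shared, line-item view of actual spend. A minimal Terraform sketch follows; the bucket name is an illustrative assumption, and the bucket must already grant the billing service permission to write to it:

provider "aws" {
  region = "us-east-1" # Cost and Usage Reports are managed out of us-east-1
}

resource "aws_cur_report_definition" "visibility" {
  report_name                = "org-cloud-costs"
  time_unit                  = "DAILY"
  format                     = "textORcsv"
  compression                = "GZIP"
  additional_schema_elements = ["RESOURCES"]        # include per-resource line items
  s3_bucket                  = "example-cur-bucket" # assumed existing bucket
  s3_prefix                  = "cur"
  s3_region                  = "us-east-1"
}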

2. Optimize: Once the baseline is understood, leadership can analyze the data in order to optimize cloud costs. Visibility into the current state is crucial for teams and leadership to understand what they are working with and how they can optimize it. This stage is where many conversations happen amongst teams to come up with an optimization action plan. It requires teams and stakeholders to communicate and work together, which ultimately builds mutual trust.

3. Operate: Finally, the data analysis and learnings can be implemented. With the optimization action plan, leaders should know which areas of the cloud demand optimization first and how to optimize them. At this point in the process, teams and stakeholders are comfortable with cross-team collaboration and honest communication. This opens up a transparent feedback loop that is necessary for continuous improvement.

Conclusion

The entire organization stands to gain when cloud economics is prioritized. A cost-efficient cloud infrastructure will lead to improved productivity, cross-functional collaboration between teams, and focused efforts towards greater business objectives. 

Ready to take control of your cloud costs and maximize the value of your cloud infrastructure? Contact 2nd Watch today and let our team of experts help you implement cloud economics within your organization. As a trusted partner for enterprise-level services and support, we have the expertise to assist you in planning, analyzing, and recommending strategies to optimize your cloud costs and drive business objectives. Don’t let cloud spending go unchecked. Take charge of your cloud economics by reaching out to a 2nd Watch cloud expert now.

Mary Fellows | Director of Cloud Economics at 2nd Watch


Riding the Digital Transformation: Why Enterprises Are Reengineering Their Cloud Infrastructure

Post-2020, how are you approaching the cloud? The rapid and unexpected digital transformation of 2020 forced enterprises worldwide to quickly mobilize workers using cloud resources. Now, as the world returns to an altered normal, it’s time for organizations to revisit their cloud infrastructure components with a fresh perspective. Hybrid work environments, industry transformations, changing consumer behavior, and growing cyber threats have all affected the way we do business. Now it might be time to change your cloud.

Risk mitigation at scale

Avoiding potential missteps in your strategy requires both wide and narrow insights. With the right cloud computing infrastructure, network equipment, and operating systems, organizations can achieve better risk mitigation and management with cloud scalability. As you continue to pursue business outcomes, you have to solve existing problems, as well as plan for the future. Some of these problems include:

  • Scaling your cloud platform and infrastructure services quickly to keep up with increasing and/or unexpected demand.
  • Maximizing cloud computing services and computing power to accommodate storage, speed, and resource demands.
  • Prioritizing new and necessary investments and delivery models within a fixed budget.
  • Innovating faster to remain, or gain, competitive advantage.

Overall, to avoid risk, you need to gain efficiency, and that’s what the cloud can do. Cloud infrastructure, applications, and Software as a Service (SaaS) solutions are designed to decrease input and increase output and effectiveness. The scalability of cloud services allows enterprises to continue growing and innovating without requiring heavy investments. With continuous cloud optimization, you’re positioned to adapt, innovate, and succeed regardless of an unknown future.

Application modernization for data leverage

Much of the digital transformation started with infrastructure modernization and the development of IaaS as a baseline. Now, application modernization is accelerating alongside a changing migration pattern: what used to be simply ‘lift and shift’ is now ‘lift and evolve.’ Enterprises want to collaborate with cloud experts to gain a deeper understanding of applications as they become more cloud native. With a constant pipeline of new applications and services, organizations need guidance to avoid cloud cost sprawl and streamline environment integration.

As application modernization continues, organizations are gaining access to massive amounts of data that enable brand-new opportunities. This requires a fresh look at database architectures to make sure you’re unlocking value internally and, potentially, externally. While application modernization and database architecture are interconnected, they can also transform separately. We’re starting to see people recognize the importance of strategic cloud transformations that include the entire data footprint, whether it’s the underlying architecture or the top-level analytics.

Organizations are getting out of long-term licensing agreements, monetizing their data, gaining flexibility, cutting costs, and driving innovation, customer value, and revenue. Data is pulled from, and fed into, many different applications within constantly changing cloud environments, which brings both challenges and opportunities. Enterprises must transform from one state to another, but the end state itself keeps changing. Therefore, continuous motion is necessary within the digital transformation.

Changing core business strategies

One thing is for sure about the digital transformation: it’s not slowing down. Most experts agree that even after pandemic safety precautions are eliminated, the digital transformation will continue to accelerate. After seeing the speed of adoption and the opportunities in the cloud, many enterprises are reevaluating the future with new eyes. Budgets for IT are expanding, but so are the IT skills gap and cybersecurity incidents. These transitions present old questions in a new light, and enterprises should revisit their answers.

  • Why do you still have your own physical data center?
  • What is the value in outsourcing? And insourcing?
  • How has your risk profile changed?
  • How does data allow you to focus on your core business strategy?

Answering these questions has more enterprises looking to partner with, and learn from, cloud experts, as opposed to just receiving services. Organizations want and need to work alongside cloud partners to close the skills gap within their enterprise, gain skills for internal expansion in the future, and better understand how virtualized resources can improve their business. It’s also a way to invest in your employees to reduce turnover and encourage long-term loyalty.

Security and compliance

At this point, enterprises must have solutions in place for security, compliance, and business continuity. There is no other way. Ransomware and phishing attacks have been rising in sophistication and frequency year over year, with a noticeable spike since remote work became mainstream. Not only does your internal team need constant training and regular enforcement of governance policies, but there’s a larger emphasis on how your network protections are set up.

Regardless of automation and controls, people will make mistakes, and there is an inherent risk in any human activity. In fact, human error is the leading cause of data loss, with approximately 88% of all data breaches caused by an employee mistake. Unfortunately, breaches are often made possible by your internal team. Typically, it’s the manner in which the cloud is configured or architected that creates a loophole for bad actors. It’s not that the public cloud isn’t secure or compliant; it’s that it’s not set up properly. This is why many enterprises are outsourcing data protection to avoid damaging compliance penalties, guarantee uninterrupted business continuity, and maintain the security of sensitive data after malicious or accidental deletion, natural disaster, or in the event that a device is lost, stolen, or damaged.

Next steps: Think about day two

Enterprises that think of cloud migration as a one-and-done project (we were there, and now we’re here) aren’t ready to make the move. The cloud is not the answer. The cloud is an enabler that helps organizations get the answers necessary to move in the direction they desire. There are risks associated with moving to the cloud: tools can distract from goals, system platforms need support, load balancers have to be implemented, and the cloud has to be leveraged and optimized to be beneficial long-term. Without strategizing past the migration, you won’t get the anticipated results.

It can seem overwhelming to take on the constantly changing cloud (and it certainly can be), but you don’t have to do it alone! Keep up with the pace and innovation of the digital transformation, while focusing on what you do best – growing your enterprise – by letting the experts help. 2nd Watch has a team of trusted cloud advisors to help you navigate cloud complexities for successful and ongoing cloud modernization. As an Amazon Web Services (AWS) Premier Partner, a Microsoft Azure Gold Partner, and a Google Cloud Partner with over 10 years’ experience, 2nd Watch provides ongoing advisory services to some of the largest companies in the world. Contact Us to take the next step in your cloud journey!

-Michael Elliott, Director of Marketing


Standardizing & Automating Infrastructure Development Processes

Introduction

Let’s start with a small look at the current landscape of technology and how we arrived here. There aren’t many areas of tech that have not been, or are not currently, in a state of flux. Everything from software delivery vehicles and development practices to infrastructure creation has experienced some degree of transformation over the past several years. From VMs to containers, it seems like almost every day the technology tool belt grows a little bigger, and our world gets a little better (though perhaps more complex) thanks to these advancements. For me, this was incredibly apparent when I began to delve into configuration management, which later evolved into what we now call “infrastructure as code.”

The transformation of the development process began with the simple systems we once used to manage a few machines (like bash scripts or Makefiles), which then morphed into more complex systems (CFEngine, Puppet, and Chef) to manage thousands. As configuration management software matured, engineers and developers began leaning on it to do more. With the advent of hypervisors and the rise of virtual machines, it was only a short time before hardware requests became API requests, and infrastructure as a service (IaaS) was born. With all the new capabilities and options in this brave new world, we once again started to lean on our configuration management systems, this time for provisioning, and not just convergence.

Provisioning & Convergence

I mentioned two terms that I want to clarify: provisioning and convergence. Say you are a car manufacturer and you want to make a car. Provisioning would be the step in which you request the raw materials to make the parts for your automobile. This is where we would use tools like Terraform, CloudFormation, or Heat. Convergence, by contrast, is the assembly line by which we check each part and assemble the final product (utilizing config management software).

By and large, the former tends to be declarative with little in the way of conditionals or logic, while the latter is designed to be robust and malleable software that supports all the systems we run and plan on running. This is the frame for the remainder of what we are going to talk about.

By separating the concerns of our systems, we can create a clear delineation of purpose for each tool, so we don’t feel like we are trying to jam everything into an interface that doesn’t have the best support for our platform or, more importantly, our users. The remainder of this post will be directed towards the provisioning aspect of configuration management.

Standards and Standardization

These are two different things in my mind. Standardization is extremely prescriptive and can often seem particularly oppressive to professional knowledge workers, such as engineers or developers. It can be seen as taking the innovation away from the job. Whereas standards provide boundaries, frame the problem, and allow for innovative ways of approaching solutions. I am not saying standardization in some areas is entirely bad, but we should let the people who do the work have the opportunity to grow and innovate in their own way with guidance. The topic of standards and standardization is part of a larger conversation about culture and change. We intend to follow up with a series of blog articles relating to organizational change in the era of the public cloud in the coming weeks.

So, let’s say that we make a standard for our new EC2 instances running Ubuntu. We’ll say that all instances must be running the latest official Canonical Ubuntu 14.04 AMI and must have these three tags: Owner, Environment, and Application. How can we enforce that during development of our infrastructure? On AWS, we can create AWS Config Rules, but that is reactive and requires ad-hoc remediation (we’ll sketch such a rule later in this post). What we really want is a more prescriptive approach that brings our standards closer to the development pipeline. One of the ways I like to solve this issue is by creating an abstraction. Say we have a Terraform template that looks like this:

# Create a new instance of the latest Ubuntu 14.04 on an EC2 instance
provider "aws" {
  region = "us-west-2"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"

  tags {
    Owner       = "DevOps Ninja"
    Environment = "Dev"
    Application = "Web01"
  }
}

This would meet the standard we have set forth, but we are relying on the developer or engineer to adhere to it. What if we enforce this standard by codifying it in an abstraction? Let’s take that existing template and turn it into a Terraform module instead.

Module

# Create a new instance of the latest Ubuntu 14.04 on an EC2 instance

variable "aws_region" {}
variable "ec2_owner" {}
variable "ec2_env" {}
variable "ec2_app" {}
variable "ec2_instance_type" {}

provider "aws" {
  region = "${var.aws_region}"
}

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "${var.ec2_instance_type}"

  tags {
    Owner       = "${var.ec2_owner}"
    Environment = "${var.ec2_env}"
    Application = "${var.ec2_app}"
  }
}

Now we can have our developers and engineers leverage our tf_ubuntu_ec2_instance module.

New Terraform Plan

module "Web01" { source =
"git::ssh://git@github.com/SomeOrg/tf_u buntu_ec2_instance"

aws_region = "us-west-2" ec2_owner = "DevOps Ninja" ec2_env	= "Dev"
ec2_app	= "Web01"
}

This doesn’t enforce the usage of the module, but it does create an abstraction that provides an easy way to maintain standards without a ton of overhead. It also provides an example for the further creation of modules that enforce these particular standards.
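
For teams that also want the reactive guardrail mentioned earlier, here is a minimal sketch of an AWS Config rule that flags EC2 instances missing our three required tags. It uses the AWS-managed REQUIRED_TAGS rule and assumes a recent AWS provider and an already-enabled AWS Config recorder:

resource "aws_config_config_rule" "required_tags" {
  name = "ec2-required-tags"

  # AWS-managed rule that checks resources for the specified tag keys
  source {
    owner             = "AWS"
    source_identifier = "REQUIRED_TAGS"
  }

  # Which tag keys must be present
  input_parameters = jsonencode({
    tag1Key = "Owner"
    tag2Key = "Environment"
    tag3Key = "Application"
  })

  # Only evaluate EC2 instances
  scope {
    compliance_resource_types = ["AWS::EC2::Instance"]
  }
}

Note that this only reports non-compliance after the fact; the module approach above keeps the standard in the development pipeline where it belongs.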

This leads us into another method of implementing standards, one that is more prescriptive and falls into the category of standardization (eek!). One of the most underutilized services in the AWS product stable has to be Service Catalog.

AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, and helps you achieve consistent governance and meet your compliance requirements, while enabling users to quickly deploy only the approved IT services they need.
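
As a hedged sketch of what seeding such a catalog might look like in Terraform (these Service Catalog resources exist in recent AWS provider versions; the portfolio, product, and template URL below are illustrative assumptions):

resource "aws_servicecatalog_portfolio" "approved" {
  name          = "approved-infrastructure"
  description   = "IT-approved, standards-compliant building blocks"
  provider_name = "Central IT"
}

resource "aws_servicecatalog_product" "web_server" {
  name  = "standard-web-server"
  owner = "Central IT"
  type  = "CLOUD_FORMATION_TEMPLATE"

  provisioning_artifact_parameters {
    name = "v1.0"
    type = "CLOUD_FORMATION_TEMPLATE"
    # Hypothetical CloudFormation template that bakes in the AMI and tag standards
    template_url = "https://s3.amazonaws.com/example-bucket/standard-web-server.template"
  }
}

# Make the product available through the portfolio
resource "aws_servicecatalog_product_portfolio_association" "web_server" {
  portfolio_id = aws_servicecatalog_portfolio.approved.id
  product_id   = aws_servicecatalog_product.web_server.id
}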

The Interface

Once we have a few of these projects in place (e.g., a service catalog or a repo full of composable modules for infrastructure that meet our standards), how do we serve them out? How you spur adoption of these tools, and how they are consumed, can vary greatly depending on your organizational structure. We don’t want to upset the workflow and how work gets done; we just want it to go faster and be more reliable. This is what we talk about when we mention the interface. Whichever way work flows in, we should supplement it with some type of software or automation to link those pieces of work together. Here are a few examples of how this might look (depending on your organization):

1.) Central IT Managed Provisioning

If you have an organization that manages requests for infrastructure, this new paradigm shift might seem daunting. The interface in this case is the ticketing system. This is where we would create an integration with our ticketing software to automatically pull the correct project from the service catalog or module repo based on some criteria in the ticket. The interface doesn’t change but is instead supplemented by automation to answer these requests, saving time and providing faster delivery of service.

2.) Full Stack Engineers

If you have engineers that develop both software and the infrastructure that runs their applications, this is the easiest scenario to address in some regards and the hardest in others. Your interface might be a build server, or it could simply be the adoption of an internal open source model where each team develops modules and shares them in a common place, constantly trying to save time and not reinvent the wheel.

Supplementing with software or automation can be done in a ton of ways. Check out an example Kelsey Hightower wrote using Jira.

“A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” – John Gall

All good automation starts with a manual, well-defined process. Standardizing and automating infrastructure development processes begins with understanding how our internal teams can best be organized to perform work efficiently; only then can we begin automating. Work with your teammates to create a value stream map so you understand the process entirely before making any effort to automate a workflow.

With 2nd Watch designs and automation, you can deploy quicker, learn faster, and modify as needed with Continuous Integration / Continuous Deployment (CI/CD). Our Workload Solutions transform on-premises workloads into digital solutions in the public cloud with next-generation products and services. To accelerate your infrastructure development so that you can deploy faster, learn more often, and adapt to customer requirements more effectively, speak with a 2nd Watch cloud deployment expert today.

– Lars Cromley, Director of Engineering, Automation, 2nd Watch