How to Choose the Best Cloud Service Provider for your Application Modernization Strategy

If the global pandemic taught us anything, it’s that digital transformation is a must-have for businesses to keep up with customer demands and remain competitive. To do this, organizations are moving their workloads to and modernizing their applications for the cloud faster than ever.

In fact, according to a recent survey, 91% of respondents agree or strongly agree that application modernization plays a critical role in their organization’s adaptability to rapidly changing business conditions. But there are so many cloud service providers to choose from! How do you know which one is best for your application modernization objectives? Keep reading to find out!  

What is a Cloud Services Provider (CSP)? 

A cloud services provider is a cloud computing company that provides public clouds, managed private clouds, or on-demand cloud infrastructures, platforms, and services. Many CSPs are available worldwide, including Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud, Oracle Cloud, and Microsoft Azure. However, three industry giants are noteworthy because of their services and global footprint: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. 

What is Application Modernization? 

Application modernization is the process of revamping an application to take advantage of breakthrough technical innovations and markedly improve its overall efficiency. That efficiency typically means high availability, increased fault tolerance, high scalability, improved security, elimination of single points of failure, disaster recovery, contemporary and simplified tooling, modern coding languages, and reduced resource requirements, among other benefits. Many companies running legacy applications are now looking at how they can best modernize their monolithic applications.

Application Rationalization: The First Step to Modernization 

The best way to start any application modernization journey is with application rationalization. In this process, you identify company-wide business applications and strategically determine which ones you should keep, replace, retire, or consolidate. Once you identify those applications, you can assess each one’s migration difficulty, total cost of ownership (TCO), and business value, enabling you to prioritize which action to take. (Hint: Start with high-value, minimal-effort apps!) Doing this will also help you eliminate redundancies, lower costs, and maximize efficiency.

The high-value apps that are difficult to move to the cloud will likely cause the most grief in your decision-making process. But, like Rome, your modernization strategy doesn’t need to be built in a day.

You can develop an approach to application modernization over time and still reduce costs and risks while moving your portfolio forward.  

When it comes to application modernization in the cloud, it is crucial to evaluate your current application stack and determine the most suitable modernization strategy for migrating to the cloud. Many on-premises applications are legacy monoliths that may benefit more from refactoring than from a rehosting (“lift and shift”) approach. (Check out Rehost, Refactor, Replatform – What, When, & Why? | AppMod Essentials)

Refactoring may require overhauling your application code, which takes significant effort but offers the most benefits. However, not all applications are ideal candidates for refactoring. Rearchitecting becomes necessary for some obsolete applications that are incompatible with the cloud due to architectural decisions made while building the app. In this scenario, rearchitecting is the better value proposition: the application is divided into several functional components that can be individually adapted and further developed. These small, independent pieces—or “microservices”—can then be migrated to the cloud quickly and efficiently.

Determining the Best Cloud Services Provider for Your Application Modernization 

Each application modernization journey is unique, as is the process of choosing the best cloud service provider that meets your demands. What works for one business’ application may not be the best for yours, even if they are in the same industry. And just because a competitor has chosen one CSP over another does not mean you should. 

When evaluating the CSP that is best for you, consider the following: 

  • Service Level Agreements (SLAs): Determine whether the CSP’s service level agreements suit your production workloads, whether the cloud service is generally available yet, and whether the provider retains satisfactory levels of support knowledge. Managing workloads in the cloud can sometimes be tedious, and a managed services department may not have the expertise required to efficiently manage and monitor the cloud environment. It is critical to your business to do your due diligence and ensure your preferred CSP can administer its managed offerings with as close to zero downtime as possible.
  • Vendor Lock-in: It is important to have alternatives to any single CSP and to retain the flexibility to switch to a better value proposition.
  • Enterprise Adoption: Consider how likely your use of the CSP is to scale across your organization.
  • Economic Impact: Consider the positive business or financial impacts that result from the service usage at the individual, department, and company-wide levels.
  • Automation and Deployment: Verify the CSP’s integration capabilities with your organization’s preferred automation tooling and the availability of automated and local testing frameworks.

CSP Application Modernization Design Considerations 

When modernizing existing applications to take full advantage of the cloud, technologies like serverless computing and containers are good options to consider. Both are cloud-native tools that automate code deployment into isolated environments, letting developers build highly scalable applications with fewer resources in less time. Both also reduce overhead for cloud-hosted web applications, though they differ in many ways. Private cloud, hybrid cloud, and multi-cloud approaches to application modernization are worth considering too.

Serverless Computing and Containers 

Serverless computing is an execution model in which the CSP dynamically allocates the resources needed to run a piece of code and charges only for the resources used while the code runs. Code is typically run in stateless containers, and various events can trigger it: HTTP requests, monitoring alerts, database events, queuing services, file uploads, scheduled events (cron jobs), and more.

The cloud service provider receives the code as a function to execute, which is why serverless computing is sometimes referred to as a Function-as-a-Service (FaaS) platform. Add that to your list of as-a-Service acronyms: IaaS, PaaS, SaaS, FaaS!

The FaaS offerings of the three major CSPs are:

  • AWS: AWS Lambda
  • GCP: Google Cloud Functions
  • Azure: Azure Functions

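As a minimal sketch of the model, here is how a function might be packaged, created, and invoked with the AWS CLI (the function name, handler file, and IAM role ARN are placeholders, not values from a real account):

$ zip function.zip handler.py
$ aws lambda create-function \
    --function-name demo-fn \
    --runtime python3.9 \
    --handler handler.lambda_handler \
    --role arn:aws:iam::123456789012:role/demo-lambda-role \
    --zip-file fileb://function.zip
$ aws lambda invoke --function-name demo-fn response.json

No servers are provisioned up front; the CSP allocates resources only for the duration of each invocation.
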
Containers provide a discrete environment set up within an operating system. They can run one or more applications, typically assigned only those resources necessary for the applications to function correctly. Because containers are smaller and faster than virtual machines, they allow applications to run quickly and reliably across various computing environments. Container images become containers at runtime and include everything needed to run an application: code, runtime, system tools, system libraries, and settings.
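
To make that concrete, here is a minimal sketch of a container image definition (the application file and dependency list are hypothetical):

# Dockerfile: everything the app needs travels with the image
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]

Building this image (docker build -t demo-app .) and running it (docker run -p 8080:8080 demo-app) yields the same behavior on a laptop, an on-premises VM, or any CSP’s container service.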

Private, Hybrid, and Multi-Cloud 

The public cloud is a vital part of any modernization strategy. However, some organizations may not be ready to go directly from the data center to the public cloud. In those cases, cloud architects should consider private, hybrid, and multi-cloud strategies. These models can help resolve architectural, security, or latency concerns, and they reduce the complexity of applying policies to specific workloads based on their unique characteristics.

Conclusion 

Migrating to the cloud is an ideal opportunity to invest in application modernization, as it can lower your overall operational costs and increase your applications’ resiliency. But not all use cases—nor cloud service providers—are the same. You need to do your homework before choosing the one best suited to your business.

2nd Watch offers a comprehensive consulting methodology and proven tools to accelerate your cloud-native and app modernization objectives. Our modernization process begins with a complete assessment of your existing application portfolio to identify which you should keep, replace, retire, or consolidate. We then develop and implement a modernization strategy that best meets your business needs.

From application rationalization to application modernization and beyond, 2nd Watch is your go-to trusted advisor throughout your entire modernization journey. 

Contact us to schedule a brief meeting with our specialists to discuss your current modernization objectives. 

By Alex Ifebigh, 2nd Watch Sr. Cloud Consultant 


2nd Watch Enhances Managed Optimization service in partnership with Spot by NetApp

Today, we’re excited to announce a new enhancement to our Managed Optimization service – Spot Instance and Container Optimization – for enterprise IT departments looking to more thoughtfully allocate cloud resources and carefully manage cloud spend.

Enterprises using cloud infrastructure and services today are seeing higher cloud costs than anticipated due to factors such as cloud sprawl, shadow IT, improper allocation of cloud resources, and a failure to use the most efficient resource based on workload. To address these concerns, we take a holistic approach to optimization and have partnered with Spot by NetApp to enhance our Managed Optimization service.

The service works by recommending workloads that can take advantage of the cost savings associated with running instances, VMs, and containers on “spot” resources. A spot resource is an unused cloud resource available for sale in a marketplace for less than the on-demand price. Because spot resources enable users to run their workloads on unused EC2 instances or VMs at steep discounts, users can significantly lower their cloud compute costs, up to 90% by some measures. To deliver this service, we’re partnering with Spot, whose cloud automation and optimization solutions help companies maximize the return on their cloud investments.
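
As a rough illustration of the mechanism, the AWS CLI lets a workload request spot capacity directly at launch (the AMI ID and instance type below are placeholders):

$ aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.large \
    --instance-market-options '{"MarketType":"spot"}'

The hard part is not requesting spot capacity but handling interruptions when the marketplace reclaims it, which is where automation comes in.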

“Early on, spot resources were difficult to manage, but the tasks associated with managing them can now be automated, making the use of spot a smart approach to curbing cloud costs,” says Chris Garvey, EVP of Product at 2nd Watch. “Typically, non-mission critical workloads such as development and staging have been able to take advantage of the cost savings of spot instances.

“By combining 2nd Watch’s expert professional services, managed cloud experience, and solutions from Spot by NetApp, 2nd Watch has been able to help companies use spot resources to run production environments.”

“Spot by NetApp is thrilled to be working with partners like 2nd Watch to help customers maximize the value of their cloud investment,” says Amiram Shachar, Vice President and General Manager of Spot by NetApp. “Working together, we’re helping organizations go beyond one-off optimization projects to instead ensure continuous optimization of their cloud environment using Spot’s unique technology. With this new offering, 2nd Watch demonstrates a keen understanding of this critical customer need and is leveraging the best technology in the market to address it.”


Using Docker Containers to Move Your Internal IT Orgs Forward

Many people are looking to take advantage of containers to isolate their workloads on a single system. Unlike traditional hypervisor-based virtualization, where each virtual machine runs its own full operating system and packages, containers let you segment off multiple applications, each with its own set of processes, on the same instance.

Let’s walk through some grievances that many of us have faced at one time or another in our IT organizations:

Say, for example, your development team is setting up a web application. They want to set up a traditional three-tier system with app, database, and web servers. They notice there is a lot of support in the open source community for their app when it is run on Ubuntu Trusty (Ubuntu 14.04 LTS) and later. They’ve developed the app in their local sandbox with an Ubuntu image they downloaded; however, their company is a Red Hat shop.

Now, depending on the type of environment you’re in, chances are you’ll have to wait for the admins to provision an environment for you. This often entails (but is not limited to) spinning up an instance, reviewing the most stable version of the OS, creating a new hardened AMI, adding it to Packer, figuring out which configs to manage, and refactoring provisioning scripts to use aptitude and Ubuntu’s directory structure (e.g., Debian has over 50,000 packages to choose from and manage). On top of that, the most stable version of Ubuntu is missing some newer packages that you’ve tested in your sandbox and that need to be pulled from source or another repository. At this point, the developers are writing configuration runbooks to support the app while the admin gets up to speed with the OS (not a major effort, but time-consuming nonetheless).

You can see my point here. A significant amount of overhead has been introduced, and it’s stagnating development. And think about the poor sysadmins. They have other environments that they need to secure, networking spaces to manage, operations to improve, and existing production stacks they have to monitor and support while getting bogged down supporting this app that is still in the very early stages of development. This could mean that mission-critical apps are potentially losing visibility and application modernization is stagnating. Nobody wins in this scenario.

Now let us revisit the same scenario with containers:

I was able to run my Jenkins build server and an NGINX web proxy, both on a hardened CentOS 7 AMI provided by the systems engineers with Docker installed. From there, I executed a docker pull command pointed at our local repository and deployed two Docker images with Debian as the underlying OS:

$ docker pull my.docker-repo.com:4443/jenkins
$ docker pull my.docker-repo.com:4443/nginx

$ docker ps
CONTAINER ID   IMAGE                                          COMMAND                  CREATED          STATUS          PORTS                                      NAMES
7478020aef37   my.docker-repo.com:4443/jenkins/jenkins:lts    "/sbin/tini -- /us…"     16 minutes ago   Up 16 minutes   8080/tcp, 0.0.0.0:80->80/tcp, 50000/tcp    jenkins
d68e3b96071e   my.docker-repo.com:4443/nginx/nginx:lts        "nginx -g 'daemon of…"   16 minutes ago   Up 16 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   nginx

$ sudo systemctl status jenkins-docker
jenkins-docker.service - Jenkins
Loaded: loaded (/etc/systemd/system/jenkins-docker.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2018-11-08 17:38:06 UTC; 18min ago
Process: 2006 ExecStop=/usr/local/bin/jenkins-docker stop (code=exited, status=0/SUCCESS)

The processes above were executed on the actual instance. Note how I’m able to cat the OS release file from within the container:

$ sudo docker exec d68e3b96071e cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

I was able to do so because Docker containers do not have their own kernel; rather, they share the kernel of the underlying host via Linux system calls (e.g., setuid, stat, umount, ls) like any other application. These system calls (or syscalls for short) are standard across kernels, and Docker supports Linux kernel version 3.10 and higher. In the event older syscalls are deprecated and replaced with new ones, you can update the kernel of the underlying host, which can be done independently of an OS upgrade. As far as the containers go, the binaries and aptitude package management tools are the same as if you had installed Ubuntu on an EC2 instance (or VM).
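
A quick way to see this kernel sharing in action (a hypothetical session; the kernel version string will vary by host) is to compare uname output on the host and inside the container:

$ uname -r
3.10.0-862.el7.x86_64
$ sudo docker exec d68e3b96071e uname -r
3.10.0-862.el7.x86_64

The Debian container reports the CentOS host’s kernel because there is only one kernel in play.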

Q: But I’m running a Windows environment. Those OSes don’t have a Linux kernel.

Yes, some developers may want to remove the cost overhead associated with Windows licenses by exploring running their apps on a Linux OS. Others may simply want to modernize their .NET applications by testing out the latest versions in containers. Docker allows you to run Linux containers on Windows 10 and Windows Server 2016. Because Docker was initially written to execute on Linux distributions, in order to take advantage of multi-tenant hosting you will have to run Hyper-V containers, which provision a thin VM on top of your hosts. You can then manage your mixed environment of Windows and Linux hosts via the --isolation option. More information can be found in the Microsoft and Docker documentation.
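
As a sketch, on a Windows host you can opt an individual container into Hyper-V isolation at run time (the image tag is illustrative):

PS> docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2016 cmd

With --isolation=process the container shares the host kernel, as on Linux; with --isolation=hyperv it gets its own thin utility VM, trading some density for stronger isolation.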

Conclusion:

IT teams need to be able to help drive the business forward. Newer technologies and security patches are released on a daily basis. Developers need to be free to modernize their code and applications, and, concurrently, Operations needs to be able to support and enhance the pipelines and platforms that get that code out faster and more securely. Leveraging Docker containers in conjunction with these pipelines helps ensure both are happening in parallel without unnecessary overhead. This allows teams to work independently in the early stages of the development cycle, yet more collaboratively to get releases out the door.

For help getting started leveraging your environment to take advantage of containerization, contact us.

-Sabine Blair, Systems Engineer & Cloud Consultant


AWS re:Invent 2018: Product Reviews & Takeaways

Interesting Takeaways

AWS re:Invent always has new product launches. The “new toys” are usually the ones that catch the most coverage, but there are a few things we feel are quite interesting coming out of re:Invent 2018 and decided they’d fit in their own section. Some are new products or additions to old products and some are based on the conversations or sessions heard around the event. Read on for our take on things!

AWS Marketplace for Containers

Announced at the Global Partner Summit keynote, the AWS Marketplace for Containers is the next logical step in the Marketplace ecosystem. Vendors will now be able to offer container solutions for their products, just as they do with AWS EC2 AMIs. The big takeaway here is just how important containerization is and how much growth we see in the implementation of containerized products and serverless architectures in general. Along with the big announcements around AWS Lambda, this just solidifies the industry’s push to adopt serverless models for applications.

AWS Marketplace – Private Marketplace

The AWS Marketplace has added the Private Marketplace to its feature set. You can now have your own marketplace that’s shared across your AWS Organizations. This is neat and all, but I think what’s even more interesting is what it hints at in the background. It seems to me that in order to have a well-established marketplace at all, your organization is going to need to be journeying on that DevOps trail: smaller teams who own and deploy focused applications (in this case, internally). I think it shows that a good deployment pipeline is really the best way to handle a project, regardless of whether it’s for external or internal customers.

Firecracker

This looks really cool. Firecracker is a virtualization tool built specifically for microVMs and function-based services (like Lambda or Fargate). It runs on bare metal… wait, what? I thought we were trying to move AWAY from our own hosted servers?! That’s true, and I’ll be honest, I don’t think many of our customers will be utilizing it. However, consider all the new IoT products and features that were announced at the conference and you’ll see there’s still a lot of bare metal, both in use AND in development! I don’t think Firecracker is meant solely for large server-farm-type setups, but quite possibly for items in the IoT space. The serverless/microservice architecture is a strong one, and this allows it to happen in the IoT space. I’m currently working on installing it onto my kids’ Minecraft micro computer. Do I smell another blog post?

Andy Jassy Says What?

In the fireside chat with Andy Jassy in the partner keynote, there were several things I found interesting, albeit not surprising (moving away from Oracle DB), but there was one that stood out above the rest:

I hear enterprises, all the time, wanting help thinking about how they can innovate at a faster clip. And, you know, it’s funny, a lot of the enterprise EBC’s I get to be involved in… I’d say roughly half the content of those are enterprises asking me about our offering and how we think about our business and what we have planned in the future, but a good chunk of every one of those conversations are enterprises trying to learn how we move quickly and how we invent quickly, and I think that enterprises realize that in this day and age if you are not reinventing fast and iterating quickly on behalf of your customers, it’s really difficult to be competitive. And so I think they want help from you in how to invent faster. Now, part of that is being able to operate on top of the cloud and operate on top of a platform like AWS that has so many services that you can stitch together however you see fit. Some of it also is, how do people think about DevOps? How do people think about organizing their teams? You know… what are the right constraints that you have but that still allow people to move quickly.

He said DevOps! So larger companies that are looking to change don’t just want fancy tools and fancy technology; they also need help getting better at effecting change. That’s absolutely outside the wheelhouse of AWS, but I think it’s very interesting that he specifically called that out, and called it out during the partner keynote. If you’re interested in learning more about any of these announcements, contact us.

-Lars Cromley, Director of Engineering