Datacenter Migration to the Cloud: Why Your Business Should Do It and How to Plan for It

Datacenter migration is ideal for businesses that want to exit or shrink on-premises datacenters, migrate workloads as-is, modernize applications, or move off another cloud. Executing a migration, however, is no small task, which is why many enterprise workloads still run in on-premises datacenters. Technology leaders often want to move more of their workloads and infrastructure to a private or public cloud, but they are put off by the seemingly complex processes and strategies involved in cloud migration, or they lack the internal cloud skills needed to make the transition.

 

Though datacenter migration can be a daunting business initiative, the benefits of moving to the cloud are well worth the effort, and the challenges of the migration process can be mitigated by creating a strategy, using the right tools, and engaging professional services. Datacenter migration provides a great opportunity to revise, rethink, and improve an organization’s IT architecture. It also delivers against business-critical drivers: reducing capital expenditure, decreasing ongoing costs, improving scalability and elasticity, shortening time-to-market, enabling digital transformation, and strengthening security and compliance.

What are Common Datacenter Migration Challenges?

To ensure a seamless and successful migration to the cloud, businesses should be aware of the potential complexities and risks associated with a datacenter migration. These complexities and risks are addressable, and if addressed properly, organizations can create not only an optimal environment for their migration project, but also a launch point for business transformation.

Not Understanding Workloads

While cloud platforms are touted as flexible, they are service-oriented resources and should be treated as such. To succeed in a cloud deployment, organizations need to understand each workload’s performance requirements (including hardware, software, and IOPS), required software and compatibility, and tolerance for change. Teams need to run each cloud workload on the cloud service that best aligns with the needs of the application and the business.

Not Understanding Licensing

Cloud marketplaces allow businesses to easily “rent” software at an hourly rate. Though the ease of this purchase is enticing, it’s important to remember that it’s not the only option. Not all large vendors offer license mobility for every application beyond the operating system, so companies should also leverage their existing relationships with licensing brokers. Just because a business is migrating to the cloud doesn’t mean it should abandon existing licensing channels. Organizations should familiarize themselves with all of their licensing options to maximize ROI.

Not Looking for Opportunities to Incorporate PaaS

Platform as a service (PaaS) is a cloud computing model in which a cloud service provider delivers hardware and software tools to users over the internet, in contrast to a build-it-yourself Infrastructure as a Service (IaaS) model. The PaaS provider abstracts everything—servers, networks, storage, operating system software, databases, development tools—enabling teams to focus on their application. This lets PaaS customers build, test, deploy, run, update, and scale applications more quickly and inexpensively than they could if they had to build out and manage the underlying IaaS environment themselves. While businesses shouldn’t feel compelled to rework all of their network configurations and operating environments for PaaS, they should look for quick PaaS wins that replace aging systems.

Not Proactively Preparing for Cloud Migration

Building a new datacenter is a major IT event and usually goes hand-in-hand with another significant business event, such as an acquisition or outgrowing the existing facility. When moving to a new on-premises datacenter, the business slows down as the company takes on a physical move. Migrating to the cloud is usually not coupled with such an eventful business change, and as a result, business does not stop when a company chooses to migrate. A critical part of cloud migration success is therefore designing the whole process to run alongside the other IT changes happening on the same timeline. Application teams frequently adopt cloud deployment practices months before their systems actually migrate; by doing so, the team is ready before the infrastructure is, which makes the migration itself a much smoother event. Aligning cloud milestones with other planned changes in this way maximizes a company’s ability to succeed.

Treating and Running the Cloud Environment Like Traditional Datacenters

It seems obvious that cloud environments should be treated differently from traditional datacenters, but this is a common pitfall for organizations to fall into. For example, planning a cloud migration should not account for traditional datacenter concerns such as air conditioning, power supply, physical security, and other facility infrastructure. Again, this may seem obvious, but when a business is used to certain practices, it can be surprisingly difficult to break entrenched mindsets and processes.

How to Plan for a Datacenter Migration

While there are potential challenges associated with datacenter migration, the benefits of moving from physical infrastructure, enterprise datacenters, and/or on-premises data storage systems to a cloud datacenter or a hybrid cloud system are well worth the effort.

Now that we’ve gone over the potential challenges of datacenter migration, how do businesses enable a successful datacenter migration while effectively managing risk?

Below, we’ve laid out a repeatable, high-level migration strategy broken down into four phases: Discovery, Planning, Execution, and Optimization. By leveraging a repeatable framework like this, organizations create the opportunity to identify assets, minimize migration costs and risks with a multi-phased approach, enable deployment and configuration, and finally, optimize the end state.

Phase 1: Discovery

During the Discovery phase, companies should understand and document the entire datacenter footprint. This means understanding the existing hardware, software applications, storage layers (databases, file shares), operating systems, networking configurations, security requirements, models of operation (release cadence, deployment processes, escalation management, system maintenance, patching, virtualization, etc.), licensing and compliance requirements, as well as other relevant assets.

The objective of this phase is to have a detailed view of all relevant assets and resources of the current datacenter footprint.
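To make that detailed view shareable and consistent across teams, it helps to capture each asset as a structured record rather than in ad hoc spreadsheets. The sketch below is a minimal, hypothetical example of such a record in Python; the field names and sample values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServerRecord:
    """One entry in the shared datacenter inventory (illustrative fields only)."""
    hostname: str
    os: str                      # e.g. "Windows Server 2016", "RHEL 7"
    vcpus: int
    memory_gb: int
    storage_gb: int
    applications: List[str] = field(default_factory=list)
    dependencies: List[str] = field(default_factory=list)  # other hosts/services this one needs
    environment: str = "non-production"                    # or "production"
    licenses: List[str] = field(default_factory=list)
    owner: str = ""                                         # team responsible for the workload

# Hypothetical entries; real data would come from discovery tooling or a CMDB export.
inventory = [
    ServerRecord("app-web-01", "RHEL 7", vcpus=8, memory_gb=32, storage_gb=200,
                 applications=["order-portal"], dependencies=["app-db-01"],
                 environment="production", owner="ecommerce"),
    ServerRecord("app-db-01", "Windows Server 2016", vcpus=16, memory_gb=128, storage_gb=2000,
                 applications=["SQL Server"], environment="production",
                 licenses=["SQL Server Standard"], owner="dba"),
]
```

Captured this way, the dependencies and environment fields feed directly into the wave planning described in Phase 2.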

The key milestones in the Discovery phase are:

  • Creating a shared datacenter inventory footprint: Every team and individual who is a part of the datacenter migration to the cloud should be aware of the assets and resources that will go live.
  • Sketching out an initial cloud platform foundations design: This involves identifying centralized concepts of the cloud platform organization such as folder structure, Identity and Access Management (IAM) model, network administration model, and more (an example follows this list).
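The foundations design itself can be captured in the same lightweight, reviewable way as the inventory. The snippet below is a hypothetical sketch of such a design; the folder names, groups, and address ranges are placeholders rather than recommendations.

```python
# Hypothetical cloud foundations design, kept alongside the inventory for review.
foundations_design = {
    "folder_structure": {
        "org-root": ["shared-services", "non-production", "production"],
    },
    "iam_model": {
        # Group-based access; individual users are never granted roles directly.
        "cloud-admins@example.com": ["org-level admin"],
        "network-team@example.com": ["network admin on shared-services"],
        "app-teams@example.com": ["editor on their own projects only"],
    },
    "network_administration": {
        "model": "centralized (shared VPC / hub-and-spoke)",
        "on_prem_connectivity": "VPN initially, dedicated interconnect later",
        "address_plan": {"non-production": "10.10.0.0/16", "production": "10.20.0.0/16"},
    },
}
```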

As a best practice, companies should engage in cross-functional dialogue within their organizations, including teams from IT to Finance to Program Management, to ensure everyone is aligned on the changes needed to support future cloud processes. Furthermore, once a business has migrated from a physical datacenter to the cloud, it should consider whether its datacenter team is trained to support the systems and infrastructure of the cloud provider.

Phase 2: Planning

When a company enters the Planning phase, it leverages the assets and deliverables gathered during Discovery to create migration waves that will be sequentially deployed into non-production and production environments.

Typically, it is best to target non-production workloads first, which helps establish the sequence of the waves that follow. To start, consider the following:

  • Mapping the current server inventory to the cloud platform’s machine types: Each current workload will generally map to a virtual machine type with similar computing power, memory, and disk. Often, however, the existing server is overprovisioned, so each workload should be evaluated to ensure it is migrated onto a right-sized VM (see the sketch after this list).
  • Timelines: Businesses should lay out their target dates for each migration project.
  • Workloads in each grouping: Decide how migration waves are grouped, e.g., non-production vs. production applications.
  • The cadence of code releases: Factor in any upcoming code releases, as these may affect whether an application should migrate sooner or later.
  • Time for infrastructure deployment and testing: Allocate adequate time to deploy and test the target infrastructure before fully cutting over to the cloud.
  • The number of application dependencies: Migration order should be influenced by the number of application dependencies. The applications with the fewest dependencies are generally good candidates for migration first. In contrast, wait to migrate an application that depends on multiple databases.
  • Migration complexity and risk: Migration order should also take complexity into consideration. Tackling simpler aspects of the migration first will generally yield a more successful migration.
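To illustrate the first item above, the sketch below maps a workload’s observed peak usage to the smallest machine type that covers it plus some headroom. The machine-type catalog, headroom factor, and usage figures are hypothetical; a real mapping would use the target cloud provider’s actual machine families and monitoring data.

```python
# Hypothetical machine-type catalog: name -> (vCPUs, memory in GB).
MACHINE_TYPES = {
    "small-2":   (2, 8),
    "medium-4":  (4, 16),
    "large-8":   (8, 32),
    "xlarge-16": (16, 64),
}

def recommend_machine_type(peak_cpu_cores: float, peak_memory_gb: float,
                           headroom: float = 1.3) -> str:
    """Pick the smallest catalog entry that covers observed peak usage plus headroom."""
    needed_cpu = peak_cpu_cores * headroom
    needed_mem = peak_memory_gb * headroom
    for name, (vcpus, mem_gb) in sorted(MACHINE_TYPES.items(), key=lambda kv: kv[1]):
        if vcpus >= needed_cpu and mem_gb >= needed_mem:
            return name
    return max(MACHINE_TYPES, key=lambda k: MACHINE_TYPES[k])  # fall back to the largest type

# An overprovisioned 16-vCPU server that peaks at 3 cores / 10 GB maps to a much smaller VM.
print(recommend_machine_type(peak_cpu_cores=3, peak_memory_gb=10))  # -> "medium-4"
```

A real plan would also weigh licensing, disk, and IOPS requirements, but even a rough mapping like this surfaces overprovisioned servers early.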

As mentioned above, the best practice for migration waves is to start with more predictable and simple workloads. For instance, companies should migrate file shares first, then databases and domain controllers, and save the applications for last. Sometimes, however, the complexity and dependencies don’t allow for a straightforward migration. In these cases, it is prudent to engage a service provider with experience in these complex environments.

Phase 3: Execution

Once companies have developed a plan, they can bring it to fruition in the Execution phase. Here, businesses need to be deliberate about the steps they take and the configurations they develop.

In the Execution phase, companies put infrastructure components in place and ensure they are configured appropriately: IAM, networking, firewall rules, and service accounts. This is also where teams should test their applications on the new infrastructure to confirm they can reach their databases, file shares, web servers, load balancers, Active Directory servers, and more. Execution also includes using logging and monitoring to verify that applications continue to function with the necessary performance.
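Much of this connectivity checking can be scripted so it is repeatable for every migration wave. The sketch below is a minimal, assumed example that verifies TCP reachability from a migrated application to its dependencies; the hostnames and ports are placeholders.

```python
import socket

# Placeholder endpoints a migrated application is expected to reach.
DEPENDENCIES = {
    "database":         ("db.internal.example.com", 1433),
    "file share":       ("files.internal.example.com", 445),
    "web tier":         ("web.internal.example.com", 443),
    "active directory": ("dc01.internal.example.com", 389),
}

def check_endpoint(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in DEPENDENCIES.items():
    status = "OK" if check_endpoint(host, port) else "UNREACHABLE"
    print(f"{name:<17} {host}:{port:<5} {status}")
```

The same checks can be rerun after every configuration change, turning iterative testing into a quick, repeatable gate.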

For the Execution phase to be successful, application debugging and testing need to be agile. Moreover, organizations should have both short- and long-term plans for resolving blockers that come up during the migration. The Execution phase is iterative, and the goal is to ensure that applications are fully tested on the new infrastructure.

Phase 4: Optimization

The last phase of a datacenter migration project is Optimization. After a business has migrated its workloads to the cloud, it should conduct periodic reviews and planning to optimize the workloads. Optimization includes the following activities:

  • Resizing machine types and disks (an illustrative rightsizing sketch follows this list)
  • Leveraging software like Terraform for more agile and predictable deployments
  • Improving automation to reduce operational overhead
  • Bolstering integration with logging, monitoring, and alerting tools
  • Adopting managed services to reduce operational overhead
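As a simple illustration of the resizing item above, the sketch below flags migrated VMs whose average utilization suggests they are oversized or idle. The utilization figures would normally come from the cloud platform’s monitoring exports; the thresholds and sample data here are assumptions chosen for illustration.

```python
# Hypothetical 30-day utilization export: name -> (avg CPU %, avg memory %).
utilization = {
    "app-web-01": (12.0, 35.0),
    "app-db-01":  (64.0, 71.0),
    "batch-07":   (1.5,  4.0),
}

def review(avg_cpu: float, avg_mem: float) -> str:
    """Classify a VM for the periodic optimization review (thresholds are illustrative)."""
    if avg_cpu < 5 and avg_mem < 10:
        return "candidate to shut down or schedule"
    if avg_cpu < 25 and avg_mem < 40:
        return "candidate to resize to a smaller machine type"
    return "leave as-is"

for vm, (cpu, mem) in utilization.items():
    print(f"{vm}: {review(cpu, mem)}")
```

The output of a review like this can feed back into infrastructure-as-code definitions so the resize itself is deployed predictably.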

Cloud services provide visibility into resource consumption and spending, so organizations can more easily see which compute resources they are paying for and identify virtual machines they no longer need. By migrating from a traditional datacenter environment to the cloud, teams can keep optimizing their workloads using the powerful tools that cloud platforms provide.

How Do I Take the First Step in Datacenter Migration?

While undertaking a full datacenter migration is a significant project, it is worthwhile. The migration framework we’ve provided can help any business break down the process into manageable stages and move fully to the cloud.

When you’re ready to take the first step, we’re here to help to make the process even easier. Contact a 2nd Watch advisor today to get started with your migration to the cloud.

 


Ransomware Attack Leaves Some Companies WannaCrying Over Technical Debt

The outbreak of a virulent strain of ransomware, variously known as WannaCry or WannaCrypt, is finally winding down. A form of malware, WannaCry exploited certain vulnerabilities in Microsoft Windows and infected hundreds of thousands of Windows computers worldwide. As the dust begins to settle, the conversation inevitably turns to what could have been done to prevent it.

The first observation is that most organizations could have been protected simply by following best practices—most notably, the regular installation of known security and critical patches that help to minimize vulnerabilities. WannaCry was not an exotic “zero day” incident; the patch for the underlying vulnerabilities (MS17-010) has been available since March. Companies like 2nd Watch maintain a regular patch schedule to protect their systems from these and similar attacks. Due to the prolific nature of this malware and its active attack vectors, 2nd Watch is requiring that all Windows systems be patched by 5/31/2017.

Other best practices include:

  • Maintaining support contracts for out-of-date operating systems
  • Enabling firewalls, in addition to intrusion detection and prevention systems
  • Proactively monitoring and validating traffic going in and out of the network
  • Implementing security mechanisms for other points of entry attackers can use, such as email and websites
  • Deploying application control to prevent suspicious files from executing, along with behavior monitoring to thwart unwanted modifications to the system
  • Employing data categorization and network segmentation to mitigate further exposure and damage to data
  • Backing up important data. This is the single most effective way of combating ransomware infection. However, organizations should ensure that backups are appropriately protected or stored off-line so that attackers can’t delete them (see the sketch after this list).
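As one illustration of that last point, backups can be written to object storage with a retention lock so that even an attacker with stolen credentials cannot delete or overwrite them before the retention date. The sketch below assumes an AWS S3 bucket that already has Object Lock enabled; the bucket name, key prefix, and retention period are placeholders, and other clouds offer comparable immutability features.

```python
from datetime import datetime, timedelta, timezone
import boto3

def upload_locked_backup(path: str, bucket: str, retention_days: int = 30) -> None:
    """Upload a backup to an S3 bucket that has Object Lock enabled.

    COMPLIANCE mode prevents the object from being deleted or overwritten
    until the retention date passes, even by users with delete permissions.
    """
    s3 = boto3.client("s3")
    with open(path, "rb") as f:
        s3.put_object(
            Bucket=bucket,
            Key=f"backups/{path}",
            Body=f,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=retention_days),
        )

# Example with placeholder names:
# upload_locked_backup("nightly-backup.tar.gz", "example-backups-locked")
```

Whatever mechanism is used, restores should be tested regularly; a backup that cannot be restored offers no protection.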

 

The importance of regularly scheduled patching and keeping systems up-to-date cannot be overemphasized. It may not be sexy, but it is highly effective.

All of these recommendations seem simple enough, but why did the outbreak spread so quickly if the vulnerabilities were known and patches were readily available? It spread because the patches were released for currently supported systems, but the vulnerability has been present in all versions of Windows dating back to Windows XP. For these older systems – no longer supported by Microsoft but still widely used – the patches weren’t there in the first place. One of the highest profile victims, Britain’s National Health Service, discovered that 90 percent of NHS trusts run at least one Windows XP device, an operating system Microsoft first introduced in 2001 and hasn’t supported since 2014. In fact, it was only because of the high-profile nature of this malware that Microsoft took the rare step this week of publishing a patch for Windows XP, Windows Server 2003 and Windows 8.

This brings us to the challenging topic of “technical debt”—the extra cost and effort to continue using older technology. The WannaCry/WannaCrypt outbreak is simply the most recent teachable moment about those costs.

A big benefit of moving to cloud computing is its ability to help rid one’s organization of technical debt. By migrating workloads into the cloud, and even better, by evolving those workloads into modern, cloud-native architectures, the issue of supporting older servers and operating systems is minimized. As Gartner pointed out in the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide, through 2018 the cloud managed service market will remain relatively immature, and more than 75% of fully successful implementations will be delivered by highly skilled, forward-looking, boutique managed service providers with a cloud-native, DevOps-centric service delivery approach, like 2nd Watch. A free download of the report can be found here.

Partners like 2nd Watch can also help reduce your overall management cost by tailoring solutions to manage your infrastructure in the cloud. Many of the best practices mentioned above – regular patching, resource isolation, traffic monitoring, and more – can be automated and are handled for you, so you can focus on your business.

Even more important, companies like 2nd Watch help ensure the ongoing optimization of your workloads, from both a cost and a performance point of view. The lifecycle of optimizing and modernizing your cloud environments is perhaps the single greatest mechanism for ensuring that you never take on and retain high levels of technical debt.

 

-John Lawler, Sr Product Manager