4 Issues in Data Migration from Legacy Systems to Avoid

The scales have finally tipped! According to a Flexera survey, 93% of organizations have a multi-cloud strategy and 53% are now operating with advanced cloud maturity. For those now behind the curve, it’s a reminder that keeping your data architecture in an on-premises solution is detrimental to remaining competitive. On-prem architecture restricts your performance and limits the growth and sophistication of your analytics. Here are some of the setbacks of remaining on-prem and the benefits of data migration from legacy systems.

Looking for the right path to data modernization? Learn about our 60-minute data architecture assessment and how it will get you there.

Greater Decentralization

For most organizations, data architecture did not grow out of an intentional process. Many on-prem storage systems developed from a variety of events ranging from M&A activity and business expansion to vertical-specific database initiatives and rogue implementations. As a result, they’re often riddled with data silos that prevent comprehensive analysis from a single source of truth.

When organizations conduct reporting or analysis with these limitations, they are at best only able to find out what happened – not predict what will happen or narrow down what they should do. The predictive analytics and prescriptive analytics that organizations with high analytical maturity are able to conduct are only possible if there’s a consolidated and comprehensive data architecture.

Though you can create a single source of data with an on-prem setup, a cloud-based data storage platform is more likely to prevent future silos. When authorized users can access all of the data from a centralized cloud hub, either through a specific access layer or the whole repository, they are less likely to create offshoot data implementations.

Slower Query Performance

The insights from analytics are only useful if they are timely. Some reports are evergreen, so a delay of a few hours, days, or even a week doesn’t alter the actionability of the insight all that much. Real-time or streaming analytics, on the other hand, requires the ability to process high-volume data at low latency, a difficult feat for on-prem data architecture to achieve without enterprise-level funding. Mid-sized businesses often cannot justify the expense, even though they need the insight available through streaming analysis to keep from falling behind larger industry competitors.

Using cloud-based data architecture enables organizations to access much faster querying. The scalability of these resources allows organizations of all sizes to ask questions and receive answers at a faster rate, regardless of whether it’s real-time or a little less urgent.

Plus, those organizations that end up working with a data migration services partner can even take advantage of solution accelerators developed through proven methods and experience. Experienced partners are better at avoiding unnecessary pipeline or dashboard inefficiencies since they’ve developed effective frameworks for implementing these types of solutions.

More Expensive Server Costs

On-prem data architecture is far more expensive than cloud-based data solutions of equal capacity. When you opt for on-prem, you always need to prepare and pay for the maximum capacity. Even if the majority of your users are conducting nothing more complicated than sales or expense reporting, your organization still needs the storage and computational power to handle data science opportunities as they arise.

All of that unused server capacity is expensive to implement and maintain when the full payoff isn’t continually realized. Also, on-prem data architecture requires ongoing updates, maintenance, and integration to ensure that analytics programs will function to the fullest when they are initiated.

Cloud-based data architecture is far more scalable, and providers only charge you for the capacity you use during a given cycle. Plus, it’s their responsibility to optimize the performance of your data pipeline and data storage architecture – letting you reap the full benefits without all of the domain expertise and effort.

Hindered Business Continuity

There’s a renewed focus on business continuity. The recent pandemic has illuminated the actual level of continuity preparedness worldwide. Of the organizations that were ready to respond to equipment failure or damage to their physical buildings, few were ready to have their entire workforce telecommuting. Those with their data architecture already situated in the cloud fared much better and more seamlessly transitioned to conducting analytics remotely.

The aforementioned accessibility of cloud-based solutions gives organizations a clear advantage over traditional on-prem data architecture. Organizations can adapt with minimal delay to property damage, natural disasters, pandemic outbreaks, or other watershed events. Plus, the centralized nature of this type of data analytics architecture prevents the unplanned losses that can occur when data is stored in disparate systems on-site. Resiliency is at the heart of cloud-based analytics.

It’s time to embrace data migration from legacy systems in your business. 2nd Watch can help! We’re experienced with migrating legacy implementations to Azure Data Factory and other cloud-based solutions.

Let’s Start Your Data Migration


Rehost vs Refactor vs Replatform | AppMod Essentials

Migrating workloads or an application to the cloud can seem daunting for any organization. The cloud is synonymous with industry buzzwords such as DevOps, digital transformation, open source, and more. As of 2021, AWS has over 200 products and services.

Nowadays, every other LinkedIn post is somehow related to the cloud. Sound familiar? Maybe a bit intimidating? If so, you are not alone! Organizations often hope that operating in the cloud will help them become more agile, enhance business continuity, or reduce technical debt, all of which are achievable in a cloud environment with proper planning.

Benjamin Franklin once said, “By failing to prepare, you are preparing to fail.” This sentiment is true not only in life but also in technology. Any successful IT project has a strategy and tangible business outcomes. Project managers must establish these before any “actual work” begins. Without this, leadership teams may not know if the project is on task and on schedule. Technical teams may struggle to determine where to start or what to prioritize. Here we’ll explore industry-standard strategies that organizations can deploy to begin their cloud journey and help technical leaders decide which path to take. 

What is Cloud Migration? 

Cloud migration is when an organization decides to move its data, applications, or other IT capabilities into a cloud service provider (CSP) such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Some organizations may decide to migrate all IT assets into the cloud; however, most organizations keep some services on-premises in a hybrid environment for various reasons. Performing a migration to the cloud may consist of multiple CSPs or even a private cloud. 

What Are the Different Strategies for Cloud Migration? 

Gartner recognizes five cloud migration strategies, nicknamed “The 5 Rs.” Individually they are called rehost, refactor, revise (a.k.a. replatform), rebuild, and replace, each with benefits and drawbacks. This blog focuses on three of those five migration approaches – rehost, refactor, and replatform – as they play a significant role in application modernization.

What is Rehost in the Cloud?

Rehost, or “lift and shift,” is the process of migrating a workload into the cloud as-is without any modifications. Rehosting usually involves infrastructure-as-a-service (IaaS) technologies in a cloud provider, such as AWS EC2 or Azure VMs. Organizations with little cloud experience may consider this strategy because it is an easy start to their cloud journey. Cloud service providers are constantly creating new services that make rehosting even easier. Because this strategy is less complex, the timeline to complete a rehost migration can be significantly shorter than for other strategies. Organizations often rehost workloads and then modernize after gaining more cloud knowledge and experience.
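In practice, rehosting means standing up IaaS resources that mirror the source servers. Below is a minimal sketch of that idea using boto3, AWS’s Python SDK; the AMI ID, instance type, and tag values are hypothetical placeholders, and a real rehost would typically be driven by a migration service rather than hand-written scripts.

```python
# Minimal sketch: provisioning a rehost target on EC2 with boto3.
# All identifiers below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: image matching the source OS
    InstanceType="m5.xlarge",         # sized to mirror the on-prem server
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "MigrationWave", "Value": "wave-1"}],
    }],
)
print("Launched", response["Instances"][0]["InstanceId"])
```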

Rehosting Pros:

  • No architecture changes – Organizations can migrate workloads as-is, which benefits those with little cloud experience. 
  • Fastest migration method – Rehosting is often the quickest path to the cloud. This method is an excellent advantage for organizations that need to vacate an on-premises data center or colocation. 
  • Organizational changes are not necessary – Organizational processes and strategies to manage workloads can remain the same since architectures are not changing. Organizations will need to learn new tools for the selected cloud provider, but the day-to-day tasks will not change.  

Rehosting Cons:

  • High costs – Monthly spending will quickly add up in the cloud without modernizing applications. Organizations must budget appropriately for rehosting migrations. 
  • Lack of innovation – Rehosting does not take advantage of the variety of innovative and modern technologies available in the cloud.  
  • Does not improve the customer experience – Without change, applications cannot improve, which means customers will have a similar experience in the cloud. 

What Does Refactor Mean?

Refactoring is the process of updating and optimizing applications for the cloud. It often involves “app modernization,” or updating the application’s existing code to take full advantage of cloud features and flexibility. This strategy can be complex because it requires source code changes and introduces modern technologies to the organization. These changes need to be thoroughly tested and optimized, which can lead to delays. Organizations should therefore take small steps, refactoring one or two modules at a time to correct issues and gaps at a smaller scale. Although refactoring may be the most time-consuming strategy, it can provide the best return on investment (ROI) once complete.

Refactoring Pros: 

  • Cost reduction – Since applications are being optimized for the cloud, refactoring can provide the highest ROI and reduce the total cost of ownership (TCO). 
  • More flexible application architectures – Refactoring allows application owners the opportunity to explore the landscape of services available in the cloud and decide which ones fit best. 
  • Increased resiliency – Technologies and concepts like auto-scaling, immutable infrastructure, and automation can increase application resiliency and reliability. Organizations should consider all of these when refactoring. 

Refactoring Cons:

  • A lot of change – Technology and cultural changes can be brutally painful. Cloud migrations often combine both, which compounds the pain. Add the complexity of refactoring, and you may have full-blown mutiny without careful planning and strong leadership. Refactoring migrations are not for the faint of heart, so tread lightly. 
  • Advanced cloud knowledge and experience are needed – Organizations lacking cloud experience may find it challenging to refactor applications by themselves. Organizations may consider using a consulting firm to address skillset gaps. 
  • Lengthy project timelines – Refactoring hundreds of applications doesn’t happen overnight. Organizations need to establish realistic timelines before starting a refactor migration. 

What is Replatform in the Cloud?

Replatforming is a happy medium between rehosting and refactoring: rather than completely overhauling the application as you would in a refactor, you apply a targeted series of changes so the application fits the cloud better, without rearchitecting the whole thing. Replatforming projects often involve rearchitecting the database to a more cloud-native solution, adding scaling mechanisms, or containerizing applications.
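To make this concrete, a common replatforming move is bolting a scaling mechanism onto an otherwise unchanged workload. The sketch below uses boto3 to attach a target-tracking scaling policy to an existing Auto Scaling group; the group name and target value are hypothetical and would be tuned to the application’s real profile.

```python
# Sketch of one typical replatforming change: adding auto-scaling to an
# existing workload. Group name and target value are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="legacy-app-asg",  # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,  # scale out/in to keep average CPU near 50%
    },
)
```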

Replatforming Pros:

  • Reduces cost – If organizations take cost-savings measures during replatforming, they will see a reduction in technical operating expenses. 
  • Acceptable compromise – Replatforming is considered a happy medium of adding features and technical capabilities without jeopardizing migration timelines. 
  • Adds cloud-native features – Replatforming can add cloud technologies like auto-scaling, managed storage services, infrastructure as code (IaC), and more. These capabilities can reduce costs and improve customer experience. 

Replatforming Cons:

  • Scope creep may occur – Organizations may struggle to draw a line in the sand when replatforming. It can be challenging to decide which cloud technologies to prioritize. 
  • Limits the amount of change that can occur – Everything cannot be accomplished at once when replatforming. Technical leaders must decide what can be done within the migration timeline, then add the remaining items to a backlog. 
  • Cloud and automation skills needed – Organizations lacking cloud experience may struggle replatforming workloads by themselves. 

Which cloud migration strategy is best for your organization? 

As stated above, it is essential to have clear business objectives for your organization’s cloud migration. Just as important is establishing a timeline for the migration. Both will help technical leaders and application owners decide which strategy is best. Below are some common goals organizations have for migrating to the cloud. 

Common business goals for cloud migrations:

  • Reduce technical debt 
  • Improve customer’s digital experience 
  • Become more agile to respond to change faster 
  • Ensure business continuity 
  • Evacuate on-premises data centers and colocations 
  • Create a culture of automation 

Determining the best migration strategy is key to getting the most out of the cloud and meeting your business objectives. It is common for organizations to use all three of these strategies in tandem, often working with trusted advisors like 2nd Watch to determine and implement the best approach. When planning your cloud migration strategy, consider these questions:

Cloud Migration Strategy Considerations:

  • Is there a hard date for migrating the application? 
  • How long will it take to modernize? 
  • What are the costs for “lift and shift,” refactoring, and/or replatforming? 
  • When is the application being retired? 
  • Can the operational team(s) support modern architectures? 

Conclusion 

In today’s world, the cloud is where the most innovation in technology occurs. Companies that want to be a part of modern technology advancements should seriously consider migrating to the cloud. Organizations can achieve successful cloud migrations with the right strategy, clear business goals, and proper skillsets. 

2nd Watch is an AWS Premier Partner, Google Cloud Partner, and Microsoft Gold Partner, providing professional and managed cloud services to enterprises. Our subject matter experts and software-enabled services provide you with tested, proven, and trusted solutions in all aspects of cloud migration and application modernization.

Contact us to schedule a discussion on how we can help you achieve your 2022 cloud modernization objectives. 

By Jacob Acton, 2nd Watch Cloud Consultant 


Google Cloud, Open-Source and Enterprise Solutions

In 2020, a year where enterprises had to rethink their business models to stay alive, Google Cloud was able to grow 47% and capture market share. If you are not already looking at Google Cloud as part of your cloud strategy, you probably should.

Google has made conscious choices about not locking in customers with proprietary technology. Open-source technology has, for many years, been a core focus for Google, and many of Google Cloud’s solutions can integrate easily with other cloud providers.

Kubernetes (GKE), Knative (Cloud Run), TensorFlow (Machine Learning), and Apache Beam (Data Pipelines) are some examples of cloud-agnostic tools that Google has open-sourced and which can be deployed to other clouds as well as on-premises, if you ever have a reason to do so.

Specifically, some of Google Cloud’s services and its go-to-market strategy set Google Cloud apart. Modern and scalable solutions like BigQuery, Looker, and Anthos fall into this category. They are best-in-class tools for their respective use cases, and if you are serious about your digital transformation efforts, you should evaluate their capabilities and understand what they can do for your business.

Three critical challenges we repeatedly see from our enterprise clients here at 2nd Watch include:

  1. How to get started with public cloud
  2. How to better leverage their data
  3. How to take advantage of multiple clouds

Let’s dive into each of these.

Foundation

Ask any architect if they would build a house without a foundation, and they would undisputedly tell you “No.” Unfortunately, many companies new to the cloud do precisely that. The most crucial step in preparing an enterprise to adopt a new cloud platform is to set up the foundation.

Future standards are dictated in the foundation, so building it incorrectly will cause unnecessary pain and suffering for your valuable engineering resources. A proper foundation, one that includes a project structure aligned with your project lifecycle and environments and a CI/CD pipeline that pushes infrastructure changes through code, will enable your teams to become more agile while managing infrastructure in a modern way.

A foundation’s essential blocks include project structure, network segmentation, security, IAM, and logging. Google has a multi-cloud tool called Cloud Operations for logs management, reporting, and alerting, or you can ingest logs into existing tools or set up the brand of firewalls you’re most familiar and comfortable with from the Google Cloud Marketplace. Depending on your existing tools and industry regulations, compliance best practices might vary slightly, guiding you in one direction or another.

DataOps

Google has, since its inception, been an analytics powerhouse. The amount of data moving through Google’s global fiber network at any given time is incredible. Why does this matter to you? Google has now made some of its internal tools that manage large amounts of data available to you, enabling you to better leverage your data. BigQuery is one of these tools.

Being serverless, you can get started with BigQuery on a budget, and it can scale to petabytes of data without breaking a sweat. If you have managed data warehouses, you know that scaling them and keeping them performant is a task that is not easy. With BigQuery, it is.
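As a taste of how little operational overhead is involved, the sketch below runs an aggregate query with Google’s official Python client. The project, dataset, and table names are hypothetical placeholders; there is no cluster to size or manage.

```python
# Minimal sketch: a serverless BigQuery query via the official Python client.
# Project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT region, SUM(amount) AS total_sales
    FROM `my-analytics-project.sales.orders`
    GROUP BY region
    ORDER BY total_sales DESC
"""

for row in client.query(query).result():  # BigQuery allocates capacity for you
    print(row.region, row.total_sales)
```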

Another valuable tool, Looker, makes visualizing your data easy. It enables departments to share a single source of truth, which breaks down data silos and enables collaboration between departments with dashboards and views for data science and business analysis.

Hybrid Cloud Solutions

Google Cloud offers several services for multi-cloud capabilities, but let’s focus on Anthos here. Anthos provides a way to run Kubernetes clusters on Google Cloud, AWS, Azure, on-premises, or even on the edge while maintaining a single pane of glass for deploying and managing your containerized applications.

With Anthos, you can deploy applications virtually anywhere and serve your users from the cloud datacenter nearest them, across all providers, or run apps at the edge – like at local franchise restaurants or oil drilling rigs – all with the familiar interfaces and APIs your development and operations teams know and love from Kubernetes.

BigQuery Omni, currently in preview, will soon be released to the public. BigQuery Omni lets you extend the capabilities of BigQuery to the other major cloud providers. Behind the scenes, BigQuery Omni runs on top of Anthos, and Google takes care of scaling and running the clusters, so you only have to worry about writing queries and analyzing data, regardless of where your data lives. For enterprises that have already adopted BigQuery, this can mean a ton of cost savings in data transfer charges between clouds, as your queries run where your data lives.

Google Cloud offers some unmatched open-source technology and enterprise solutions that you can leverage to gain competitive advantages. 2nd Watch has helped organizations overcome business challenges and meet objectives with similar technology, implementations, and strategies on all major cloud providers, and we would be happy to assist you in getting to the next level on Google Cloud.

2nd Watch is here to serve as your trusted cloud data and analytics advisor. When you’re ready to take the next step with your data, contact us.

Learn more

Webinar: 6 Essential Tactics for your Data & Analytics Strategy

Webinar: Building an ML foundation for Google BigQuery ML & Looker

-Aleksander Hansson, 2nd Watch Google Cloud Specialist


5 Cloud Optimization Benefits

When making a cloud migration, a common term that gets tossed around is “cloud optimization”. If your organization is new to the cloud, optimizing your environment is essential to ensuring your migration pays off quickly and continues to do so in the long term.

If your organization is already established in the cloud, you may observe higher costs than expected due to cloud sprawl, under-utilized resources, and improper allocation of resources. Cloud optimization helps your organization reduce these costs and improve overall efficiency in the cloud.

What is cloud optimization?

The definition of cloud optimization may vary from one cloud service provider to another, but generally, cloud optimization is the process of analyzing, configuring, provisioning, and right-sizing cloud resources to maximize performance and minimize waste for cost efficiency. The reality is that many organizations’ cloud environments are configured in an inefficient manner that creates unnecessary cloud spend. With proper cloud optimization tools and practices, these unnecessary costs can be eliminated.

While cloud optimization is mostly discussed in terms of cloud spend, cost optimization is simply one facet of cloud optimization, which extends to overall performance and organizational efficiency. Some examples of cloud optimization practices that your organization can adopt right now include:

  • Right-sizing: Matching your cloud computing instance types (i.e., containers and VMs) and sizes with enough resources to sufficiently meet your workload performance and capacity needs at the lowest possible cost.
  • Family Refresh: Replacing outdated instance generations with current ones to maximize price-performance.
  • Autoscaling: Scaling your resources with application demand so you are only paying for what you use.
  • Applying Discounts: Reserved instances (RIs) allow companies to commit to cloud resources for a longer period of time. The longer the commitment and the more a company is prepared to pre-pay at the beginning of a period, the greater the discount will be. Discounted pricing models like RIs and spot instances will drive down your cloud costs when matched to your workload (a worked example of the savings math follows this list).
  • Identify use of RIs: Identifying where RIs fit can be an effective way to save money in the cloud when they are applied to suitable workloads.
  • Eliminate Waste: Eliminating unused resources is a core component of cloud optimization. If you haven’t already adopted cloud optimization practices, you are most likely provisioning more resources than necessary or not using certain resources to their full capacity.
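To illustrate the discount math mentioned above, here is a worked example with hypothetical hourly rates; actual prices vary by provider, instance type, region, and term.

```python
# Worked example of reserved-instance savings with hypothetical rates.
HOURS_PER_YEAR = 8760

on_demand_rate = 0.192  # $/hour, hypothetical on-demand price
reserved_rate = 0.121   # $/hour, hypothetical 1-year reserved price

on_demand_annual = on_demand_rate * HOURS_PER_YEAR
reserved_annual = reserved_rate * HOURS_PER_YEAR
savings_pct = 100 * (1 - reserved_annual / on_demand_annual)

print(f"On-demand: ${on_demand_annual:,.0f}/year")
print(f"Reserved:  ${reserved_annual:,.0f}/year")
print(f"Savings:   {savings_pct:.0f}%")  # roughly 37% with these sample rates
```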

Why is cloud optimization important?

Overspending in the cloud is a common issue: many organizations allocate more resources to a workload than necessary. Integrating cloud optimization practices can reap many benefits for your cloud infrastructure and your organization, including the following:

  • Cloud Efficiency: When workload performance, compliance, and cost are continually balanced against the best-fit infrastructure in real-time, efficiency is achieved. Implementing cloud optimization practices will eliminate as much cloud resource waste as possible, increasing the performance of your cloud environment.
  • Cost Savings: Although cloud optimization comes in a variety of forms, cost optimization is the most important component for many organizations. By reducing waste in the cloud, costs are reduced as a byproduct.
  • Greater Visibility: Cloud optimization practices utilize analytics to provide visibility into your cloud environment to make data-driven decisions. Implementing optimization tools also provides cost visibility, so your organization has a better perspective on cloud spend.
  • Increased Productivity: Once a cloud optimization strategy is implemented, IT teams will spend less time trying to solve problems because an optimized environment prevents problems before they occur.
  • Organizational Innovation & Efficiency: Implementing cloud optimization is often accompanied by a cultural shift within organizations, such as improved decision-making and collaboration across teams.

What are cloud optimization services?

Public cloud service providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud offer over 500,000 distinct prices and technical combinations, which can overwhelm even the most experienced IT organizations and business units. Luckily, there are services that can help your organization achieve the cloud optimization it needs to drive business outcomes. Cloud optimization services help your organization identify areas of improvement in your cloud for cost savings and efficiency, create an optimization strategy, and manage your cloud infrastructure for continuous optimization.

At 2nd Watch, we take a holistic approach to cloud optimization. We have developed optimization pillars based on real-time data to ensure your cloud environments are running as efficiently as possible. Behind our cloud optimization solutions is a team of experienced data scientists and architects who help you maximize the performance and returns of your cloud assets. Our service offerings for cloud optimization at 2nd Watch include:

  • Strategy & Planning: Define your optimization strategy with our proven methodology, tailored to meet your desired business outcomes and maximize your results.
  • Cost Optimization Assessment: Gain the visibility necessary to make data-driven decisions. Identify opportunities across our Pillars of Optimization to maximize cost savings and cloud environment efficiency.
  • Spot Instance & Container Optimization: Save up to 90% compared to traditional cloud infrastructure by running both Instances/VMs and Containers on spot resources for relevant workloads.
  • Multi-Cloud Optimization: Optimizing a single public cloud is one challenge; optimizing a hybrid or multi-cloud environment is another entirely. Apply learnings from your assessment to optimize your cloud environment for AWS, Microsoft Azure, Google Cloud, and VMware on AWS.
  • Forecasting, Modeling, & Analytics: Understand your past usage, and model and forecast your future needs with the analytical data needed for visibility across your organization.

Our cloud optimization process starts with data, and you have a lot of it. But data alone can lead you astray, yielding wasted resources and overspend. There are many other factors to evaluate, such as EDP/EA agreements and Savings Plans/RI purchases, to ensure you choose the most cost-effective option for your business. Strategically, our data scientists and architects map connections between data and workloads. We then make correlations between how workloads interact with each resource and the optimal financial mechanism to reach your cloud optimization goals.

Cloud Optimization with 2nd Watch

Working with a managed cloud service provider like 2nd Watch will give your organization the expertise needed for cloud optimization. If you want to learn more about cost savings or are interested in fully optimizing your cloud infrastructure, contact us to take your next steps.


Datacenter Migration to the Cloud: Why Your Business Should Do it and How to Plan for it

Datacenter migration is ideal for businesses that are looking to exit or reduce on-premises datacenters, migrate workloads as-is, modernize apps, or leave another cloud. Executing migrations, however, is no small task, and as a result, many enterprise workloads still run in on-premises datacenters. Technology leaders often want to migrate more of their workloads and infrastructure to a private or public cloud, but they are put off by the seemingly complex processes and strategies involved in cloud migration, or they lack the internal cloud skills necessary to make the transition.

Though datacenter migration can be a daunting business initiative, the benefits of moving to the cloud are well worth the effort, and the challenges of the migration process can be mitigated by creating a strategy, using the correct tools, and utilizing professional services. Datacenter migration provides a great opportunity to revise, rethink, and improve an organization’s IT architecture. It also ultimately impacts business-critical drivers such as reducing capital expenditure, decreasing ongoing cost, improving scalability and elasticity, improving time-to-market, enacting digital transformation, and attaining improvements in security and compliance.

What are Common Datacenter Migration Challenges?

To ensure a seamless and successful migration to the cloud, businesses should be aware of the potential complexities and risks associated with a datacenter migration. These complexities and risks are addressable, and if addressed properly, organizations can not only create an optimal environment for their migration project but also provide the launch point for business transformation.

Not Understanding Workloads

While cloud platforms are touted as flexible, they are service-oriented resources and should be treated as such. To be successful in cloud deployment, organizations need to understand their workloads’ compatibility, performance requirements (including hardware, software, and IOPS), required software, and adaptability to change. Teams need to run their cloud workloads on the cloud service that is best aligned with the needs of the application and the business.

Not Understanding Licensing

Cloud marketplaces allow businesses to easily “rent” software at an hourly rate. Though the ease of this purchase is enticing, it’s important to remember that it’s not the only option. Not all large vendors offer licensing mobility for applications outside the operating system, so companies should leverage existing relationships with licensing brokers. Just because a business is migrating to the cloud doesn’t mean it should abandon existing licensing channels. Organizations should familiarize themselves with their licensing choices to better maximize ROI.

Not Looking for Opportunities to Incorporate PaaS

Platform as a service (PaaS) is a cloud computing model where a cloud service provider delivers hardware and software tools to users over the internet versus a build-it-yourself Infrastructure as a Service (IaaS) model. The PaaS provider abstracts everything—servers, networks, storage, operating system software, databases, development tools—enabling teams to focus on their application. This enables PaaS customers to build, test, deploy, run, update and scale applications more quickly and inexpensively than they could if they had to build out and manage an IaaS environment on top of their application. While businesses shouldn’t feel compelled to rewrite all their network configurations and operating environments, they should see where they can have quick PaaS wins to replace aging systems.

Not Proactively Preparing for Cloud Migration

Building a new datacenter is a major IT event and usually goes hand-in-hand with another significant business event, such as an acquisition, or outgrowing the existing datacenter. In the case of moving to a new on-premises datacenter, the business will slow down as the company takes on a physical move. Migrating to the cloud is usually not coupled with an eventful business change, and as a result, business does not stop when a company chooses to migrate to the cloud. Therefore, a critical part of cloud migration success is designing the whole process as something that can run along with other IT changes that occur on the same timeline. Application teams frequently adopt cloud deployment practices months before their systems actually migrate to the cloud. By doing so, the team is ready before their infrastructure is even prepared, which makes cloud migration a much smoother event. Combining cloud events with other changes in this manner will maximize a company’s ability to succeed.

Treating and Running the Cloud Environment Like Traditional Datacenters

It seems obvious that cloud environments should be treated differently from traditional datacenters, but this is a common pitfall for organizations to fall into. For example, preparing to migrate to the cloud should not include planning for traditional datacenter services, like air conditioning, power supply, physical security, and other datacenter infrastructure. Again, this may seem very obvious, but if a business is used to certain practices, it can be surprisingly difficult to break entrenched mindsets and processes.

How to Plan for a Datacenter Migration

While there are potential challenges associated with datacenter migration, the benefits of moving from physical infrastructure, enterprise datacenters, and/or on-premises data storage systems to a cloud datacenter or a hybrid cloud system are well worth the effort.

Now that we’ve gone over the potential challenges of datacenter migration, how do businesses enable a successful datacenter migration while effectively managing risk?

Below, we’ve laid out a repeatable, high-level migration strategy broken down into four phases: Discovery, Planning, Execution, and Optimization. By leveraging such a repeatable framework, organizations can identify assets, minimize migration costs and risks using a multi-phased migration approach, enable deployment and configuration, and finally, optimize the end state.

Phase 1: Discovery

During the Discovery phase, companies should understand and document the entire datacenter footprint. This means understanding the existing hardware mapping, software applications, storage layers (databases, file shares), operating systems, networking configurations, security requirements, models of operation (release cadence, how to deploy, escalation management, system maintenance, patching, virtualization, etc.), licensing and compliance requirements, as well as other relevant assets.

The objective of this phase is to have a detailed view of all relevant assets and resources of the current datacenter footprint.

The key milestones in the Discovery phase are:

  • Creating a shared datacenter inventory footprint: Every team and individual who is a part of the datacenter migration to the cloud should be aware of the assets and resources that will go live.
  • Sketching out an initial cloud platform foundations design: This involves identifying centralized concepts of the cloud platform organization such as folder structure, Identity and Access Management (IAM) model, network administration model, and more.

As a best practice, companies should engage in cross-functional dialogue within their organizations, including teams from IT to Finance to Program Management, ensuring everyone is aligned on changes to support future cloud processes. Furthermore, once a business has migrated from a physical datacenter to the cloud, it should consider whether its datacenter team is trained to support the systems and infrastructure of the cloud provider.

Phase 2: Planning

In the Planning phase, a company leverages the assets and deliverables gathered in the Discovery phase to create migration waves that are sequentially deployed into non-production and production environments.

Typically, it is best to target non-production migration waves first, which helps establish the sequence for the waves that follow. To start, consider the following:

  • Mapping the current server inventory to the cloud platform’s machine types: Each current workload will generally run on a virtual machine type with similar computing power, memory, and disk. Oftentimes, though, the current workload is overprovisioned, so each one should be evaluated to ensure it is migrated onto the right VM.
  • Timelines: Businesses should lay out their target dates for each migration project.
  • Workloads in each grouping: Figure out how migration waves are grouped (e.g., non-production vs. production applications).
  • The cadence of code releases: Factor in any upcoming code releases as this may impact the decision of whether to migrate sooner or later.
  • Time for infrastructure deployment and testing: Allocate adequate time for testing infrastructures before fully moving over to the cloud.
  • The number of application dependencies: Migration order should be influenced by the number of application dependencies. Applications with the fewest dependencies are generally good candidates to migrate first; in contrast, wait to migrate an application that depends on multiple databases (a sketch of this ordering follows this list).
  • Migration complexity and risk: Migration order should also take complexity into consideration. Tackling simpler aspects of the migration first will generally yield a more successful migration.
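As a simple illustration of the ordering principle above, the sketch below sorts a hypothetical application inventory into waves by dependency count, fewest first. Real wave planning would also weigh complexity, timelines, and release cadence.

```python
# Sketch: ordering applications into migration waves by dependency count.
# The inventory is hypothetical; fewer dependencies generally means lower risk.
inventory = {
    "file-share": [],
    "sql-db-1":   [],
    "hr-portal":  ["file-share"],
    "crm":        ["sql-db-1"],
    "erp":        ["sql-db-1", "file-share", "crm"],
}

ordered = sorted(inventory, key=lambda app: len(inventory[app]))

for wave, app in enumerate(ordered, start=1):
    print(f"Wave {wave}: {app} ({len(inventory[app])} dependencies)")
```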

As mentioned above, the best practice for migration waves is to start with more predictable and simple workloads. For instance, companies should migrate file shares first, then databases and domain controllers, and save the apps for last. However, sometimes the complexity and dependencies don’t allow for a straightforward migration. In these cases, engaging a service provider experienced with complex environments is prudent.

Phase 3: Execution

Once companies have developed a plan, they can bring it to fruition in the Execution phase. Here, businesses will need to be deliberate about the steps they take and the configurations they develop.

In the Execution phase, companies will put into place infrastructure components and ensure they are configured appropriately, like IAM, networking, firewall rules, and Service Accounts. Here is also where teams should test the applications on the infrastructure configurations to ensure that they have access to their databases, file shares, web servers, load balancers, Active Directory servers, and more. Execution also includes using logging and monitoring to ensure applications continue to function with the necessary performance.

In order for the Execution phase to be successful, there needs to be agile application debugging and testing. Moreover, organizations should have both short- and long-term plans for resolving blockers that may come up during the migration. The Execution phase is iterative, and the goal should be to ensure that applications are fully tested on the new infrastructure.

Phase 4: Optimization

The last phase of a datacenter migration project is Optimization. After a business has migrated its workloads to the cloud, it should conduct periodic reviews and planning to optimize the workloads. Optimization includes the following activities:

  • Resizing machine types and disks
  • Leveraging software like Terraform for more agile and predictable deployments
  • Improving automation to reduce operational overhead (a sketch of one such check follows this list)
  • Bolstering integration with logging, monitoring, and alerting tools
  • Adopting managed services to reduce operational overhead
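As one example of the automation this phase calls for, the sketch below scans EC2 instances for sustained low CPU and flags resize candidates using CloudWatch metrics via boto3. The 10% threshold and 14-day window are hypothetical starting points, not recommendations.

```python
# Sketch: flag EC2 instances whose average CPU suggests they can be downsized.
# Threshold and look-back window are hypothetical starting points.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=end,
            Period=86400,  # one datapoint per day
            Statistics=["Average"],
        )["Datapoints"]
        if datapoints:
            avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
            if avg_cpu < 10.0:  # hypothetical underutilization threshold
                print(f"{instance_id}: avg CPU {avg_cpu:.1f}% (resize candidate)")
```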

Cloud services provide visibility into resource consumption and spending, so organizations can more easily identify the compute resources they are paying for and which virtual machines they do or don’t need. By migrating from a traditional datacenter environment to a cloud environment, teams will be able to optimize their workloads using the powerful tools that cloud platforms provide.

How do I take the first step in datacenter migration?

While undertaking a full datacenter migration is a significant project, it is worthwhile. The migration framework we’ve provided can help any business break down the process into manageable stages and move fully to the cloud.

When you’re ready to take the first step, we’re here to help to make the process even easier. Contact a 2nd Watch advisor today to get started with your migration to the cloud.
