Application Modernization in the Cloud

The cloud market is maturing, and organizations worldwide are well into implementing their cloud strategies. In fact, a recent McKinsey survey estimates that by 2022, 75% of all workloads will be running in either public or private clouds. Additionally, according to VMware, 72% of businesses are looking for a path forward for their existing applications, and it is important to consider an app modernization strategy as part of these migration efforts. Whether the desire is to containerize, utilize cloud-native services, increase agility, or realize cost savings, the overall goal should be to deliver business value faster in the rapidly changing cloud environment.

Application modernization focuses on legacy or “incumbent” line-of-business applications, and approaches range from re-hosting applications from the datacenter to the cloud all the way to full cloud-native rewrites. We prefer to take a pragmatic approach: address the issues with legacy applications that hinder organizations from realizing the benefits of modern software and cloud-native approaches, while retaining as much as possible of the intellectual property that has been built into incumbent applications over the years. Additionally, we find ways of augmenting existing code bases to make use of modern paradigms.

When approaching legacy software architecture, people often discuss breaking monolithic applications apart into microservices. However, the most important architectural decisions should center on how to best allow the application to function well in the cloud, with scalability, fault-tolerance, and observability all being important aspects. A popular approach is to consider the tenets of the 12-Factor App to help guide these decisions.

Architecture discussions go hand in hand with considering platforms. Containerization and serverless functions are popular approaches, but equally valid is traditional VM clustering or even self-hosting. Additionally, we start to think about utilizing cloud services to offload some application complexity, such as AWS S3 for document storage or AWS KMS for key management. This leads us to consider different cloud providers themselves for best fit for the organization and the applications overall, whether it be AWS, Azure, GCP, or hybrid-cloud solutions.
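
As a brief illustration of offloading complexity to a cloud service, a document store backed by Amazon S3 can replace a local disk or network share with a few API calls. This is a minimal sketch using boto3; the bucket and key names are hypothetical:

    import boto3

    # Hypothetical bucket name for illustration.
    BUCKET = "example-legacy-app-documents"

    s3 = boto3.client("s3")

    def save_document(doc_id: str, content: bytes) -> None:
        """Store a document in S3 instead of on a local disk or network share."""
        s3.put_object(Bucket=BUCKET, Key=f"documents/{doc_id}", Body=content)

    def load_document(doc_id: str) -> bytes:
        """Fetch a document previously offloaded to S3."""
        response = s3.get_object(Bucket=BUCKET, Key=f"documents/{doc_id}")
        return response["Body"].read()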

Another very important aspect of application modernization, especially in the cloud, is ensuring that applications have proper automation. Strong continuous integration and continuous deployment (CI/CD) pipelines should be implemented or enhanced for legacy applications. Additionally, we apply CI/CD automation for deploying database migrations and performing infrastructure-as-code (IaC) updates, and we ensure paradigms like immutable infrastructure (e.g., pre-packaging machine images or utilizing containerization) are utilized.

Lastly, there is an important cultural aspect to modernization, from the organizational level down to individual teams. Organizations must consider modernization a part of their overall cloud strategy and support their development teams in this area. Development teams must adapt to new paradigms to understand and best utilize the cloud – adopting strong DevOps practices and reorganizing teams along business objectives instead of technology objectives is key.

By implementing a solid modernization strategy, businesses can realize the benefits the cloud provides, deliver value to their customers more rapidly, and compete in a rapidly changing cloud environment. If you’re ready to implement a modernization strategy in your organization, contact us for guidance on how to get started.

-James Connell, Sr Cloud Consultant


What is Application Modernization?

Application modernization is the process of migrating an incumbent or legacy software application to modern development patterns, paradigms and platforms with the explicit purpose of improving business value. This implies improving the software architecture, application infrastructure, development techniques and business strategy using a cloud native approach.

Modernizing software architecture is often described as splitting apart a monolithic codebase, but it can imply any improvement to the software itself, such as decoupling components or addressing tech debt in the codebase. Other examples might be finding new design patterns that allow for scale, addressing resiliency within an application, or improving observability through logs and tracing.

We often think of application modernization in the context of the cloud, and when planning a migration to the cloud or modernizing an application already in the cloud, we look at which services and platforms are beneficial to the effort. Utilizing a service such as Amazon S3 for serving documents instead of a network share, or Elasticsearch instead of the database for search, are examples of infrastructure improvements. Containerization and serverless platforms are also considered.

Development techniques also need to be addressed in the context of modernization. Developers should focus on the parts of the application that deliver value to customers and provide competitive advantage. If developers are focused on maintenance, long manual deployments, bugs, and log investigation, they are unable to deliver value quickly. When working with modern distributed cloud applications, teams need to follow strong DevOps practices in order to be successful. CI/CD, unit testing, diagnostics and alerting are all areas that development teams can focus on modernizing.

Legacy Systems

In this context, legacy software refers to an incumbent application or system that blocks or slows an organization’s ability to accomplish its business goals. These systems still provide value and are great candidates for modernization.

Legacy can imply many things, but some common characteristics of legacy apps are:

  • Applications that run older libraries, outdated frameworks, or development platforms or operating systems that are no longer supported.
  • Architectural issues – monolithic or tightly coupled systems can lead to difficulties in deployment, long release cycles and high defect rates.
  • Large amounts of technical debt, dead or unused code, teams who no longer understand how older parts of the application work, etc.
  • Security issues caused by technical debt, outdated security paradigms, unpatched operating systems, and improper secret management.
  • Lack of instrumentation, with no way to observe the application.
  • Session state maintained on the server (requiring sticky sessions, etc.).
  • Manual deployments, or deployments that must happen in specific ways due to tight coupling.

Pillars of Modernization

When approaching a modernization project, we specifically look to ensure the following:

Flexible Architecture

The modernization initiative should follow a distributed computing approach, meaning it should take advantage of concepts such as elasticity, resiliency, and containerization. Converting applications to adhere to the principles of the “12-factor app” in order to take advantage of containerization is a prime example.
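
For example, one of the twelve factors is storing configuration in the environment rather than in files baked into a build. A minimal Python sketch, with hypothetical setting names:

    import os

    # 12-factor config: read settings from the environment so the same
    # build can run unchanged in dev, staging, and production.
    DATABASE_URL = os.environ["DATABASE_URL"]  # required: fail fast if missing
    CACHE_URL = os.environ.get("CACHE_URL", "redis://localhost:6379")
    MAX_WORKERS = int(os.environ.get("MAX_WORKERS", "4"))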

Automation

The application must be built, tested, and deployed using modern CI/CD processes. Older centralized version control systems such as RCS or SVN should be replaced with a distributed version control system (Git). Infrastructure as code should be included as part of the CI/CD system.
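
As a small illustration of infrastructure as code, here is a minimal sketch using the AWS CDK for Python (assuming CDK v2; the stack and resource names are hypothetical). Because the infrastructure is ordinary code, it can be reviewed and deployed through the same CI/CD pipeline as the application:

    from aws_cdk import App, Stack
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class DocumentStorageStack(Stack):
        """Infrastructure defined in code, versioned alongside the app."""

        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            # A versioned bucket for the application's documents.
            s3.Bucket(self, "Documents", versioned=True)

    app = App()
    DocumentStorageStack(app, "document-storage")
    app.synth()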

Observability

Holistically integrate logs, metrics, and events, enabling “the power to ask new questions of your system, without having to ship new code or gather new data in order to ask those new questions” (Charity Majors, https://www.honeycomb.io/blog/observability-a-manifesto). Observability is key to understanding performance, error rates, and communication patterns, and it enables you to measure your system and establish baselines.
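
One concrete step in this direction is emitting structured (JSON) logs with context attached, so downstream tools can filter and aggregate on any field rather than grepping message strings. A minimal Python sketch; the logger and field names are hypothetical:

    import json
    import logging
    import time

    class JsonFormatter(logging.Formatter):
        """Render each log record as a single JSON object."""

        def format(self, record: logging.LogRecord) -> str:
            return json.dumps({
                "ts": time.time(),
                "level": record.levelname,
                "logger": record.name,
                "message": record.getMessage(),
                # Context attached via the `extra` argument below.
                "order_id": getattr(record, "order_id", None),
            })

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger("orders")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("order processed", extra={"order_id": "A-1001"})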

Culture

Application teams should be aligned along business function, not technology, meaning multi-disciplinary teams that can handle operations (DevOps), database administration, testing (QA), and development. A culture of ownership is important in a cloud-native application.

Examples

Application Modernization is not:

  • Just containerization – To take full advantage of containerization, applications must be properly architected (12-factor), instrumented for observability, and deployed using CI/CD.
  • Just adopting the latest framework or technology – The technology might be “modern” in a sense but doesn’t necessarily address cultural or legacy architectural issues.
  • Just addressing TCO – Addressing cost savings without addressing legacy issues does not constitute modernization.
  • Just running a workload in the cloud – Re-hosting alone leaves legacy issues in place.
  • Just changing database platforms – Licensing issues or the desire to move to open-source clustered cloud databases does not equate to modernization.

Application modernization includes, among others, combinations of:

  • Moving a SaaS application from a single to multi-tenant environment.
  • Breaking up a monolithic application into microservices.
  • Applying event driven architecture to decouple and separate concerns.
  • Utilizing cloud services such as S3 to replace in-house solutions.
  • Refactoring to use NoSQL technologies such as MongoDB, Elasticsearch, or Redis.
  • Containerization and utilization of PaaS technologies such as Kubernetes or Nomad.
  • Utilization of serverless (FaaS) technologies such as AWS Lambda, Azure Functions, OpenFaaS, or Kubeless.
  • Creating strong API abstractions like REST or gRPC and utilizing API gateways.
  • Transitioning to client-side rendering frameworks (React, Vue.js, etc.) and serverless edge deployment of UI assets, removing the web server.
  • Moving long-running synchronous tasks to asynchronous batch processes (see the sketch after this list).
  • Utilizing saga patterns or business process workflows.
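
To illustrate the asynchronous pattern referenced above, here is a minimal sketch that enqueues work to Amazon SQS instead of blocking a web request; the queue URL and message shape are hypothetical:

    import json
    import boto3

    sqs = boto3.client("sqs")

    # Hypothetical queue URL for illustration.
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/report-jobs"

    def request_report(customer_id: str) -> None:
        """Rather than generating the report synchronously in the request,
        enqueue a job for a background worker and return immediately."""
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"type": "monthly_report", "customer_id": customer_id}),
        )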

If you’re ready to start considering application modernization in your organization, contact us for guidance on how to get started.

-James Connell, Sr Cloud Consultant



Cloud Crunch Podcast: 5 Strategies to Maximize Your Cloud’s Value – Create Competitive Advantage from your Data

AWS Data Expert, Saunak Chandra, joins today’s episode to break down the first of five strategies used to maximize your cloud’s value – creating competitive advantage from your data. We look at tactics including Amazon Redshift, RA3 node type, best practices for performance, data warehouses, and varying data structures. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.


Google Cloud, Open-Source and Enterprise Solutions

In 2020, a year where enterprises had to rethink their business models to stay alive, Google Cloud was able to grow 47% and capture market share. If you are not already looking at Google Cloud as part of your cloud strategy, you probably should.

Google has made conscious choices about not locking in customers with proprietary technology. Open-source technology has, for many years, been a core focus for Google, and many of Google Cloud’s solutions can integrate easily with other cloud providers.

Kubernetes (GKE), Knative (Cloud Run), TensorFlow (machine learning), and Apache Beam (data pipelines) are some examples of cloud-agnostic tools that Google has open-sourced and that can be deployed to other clouds, as well as on-premises, if you ever have a reason to do so.

Specifically, some of Google Cloud’s services and its go-to-market strategy set Google Cloud apart. Modern and scalable solutions like BigQuery, Looker, and Anthos fall into this category. They are best-in-class tools for each of their use cases, and if you are serious about your digital transformation efforts, you should evaluate their capabilities and understand what they can do for your business.

Three critical challenges we see repeatedly from our enterprise clients here at 2nd Watch include:

  1. How to get started with public cloud
  2. How to better leverage their data
  3. How to take advantage of multiple clouds

Let’s dive into each of these.

Foundation

Ask any architect if they would build a house without a foundation, and they would unequivocally tell you “No.” Unfortunately, many companies new to the cloud do precisely that. The most crucial step in preparing an enterprise to adopt a new cloud platform is setting up the foundation.

Future standards are dictated in the foundation, so building it incorrectly will cause unnecessary pain and suffering for your valuable engineering resources. A proper foundation – one that aligns your project structure with your project lifecycle and environments, and includes a CI/CD pipeline to push infrastructure changes through code – will enable your teams to become more agile while managing infrastructure in a modern way.

A foundation’s essential blocks include project structure, network segmentation, security, IAM, and logging. Google has a multi-cloud tool called Cloud Operations for logs management, reporting, and alerting, or you can ingest logs into existing tools or set up the brand of firewalls you’re most familiar and comfortable with from the Google Cloud Marketplace. Depending on your existing tools and industry regulations, compliance best practices might vary slightly, guiding you in one direction or another.

DataOps

Google has, since its inception, been an analytics powerhouse. The amount of data moving through Google’s global fiber network at any given time is incredible. Why does this matter to you? Google has now made some of its internal tools that manage large amounts of data available to you, enabling you to better leverage your data. BigQuery is one of these tools.

Because BigQuery is serverless, you can get started on a budget, and it can scale to petabytes of data without breaking a sweat. If you have managed data warehouses, you know that scaling them while keeping them performant is no easy task. With BigQuery, it is.
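
As a quick illustration of that serverless model, there is no cluster to size or manage; you simply submit SQL. A minimal sketch using the google-cloud-bigquery Python client, with a hypothetical project and table:

    from google.cloud import bigquery

    # No servers to provision: create a client and submit a query.
    client = bigquery.Client()

    query = """
        SELECT region, SUM(amount) AS total
        FROM `example-project.sales.orders`  -- hypothetical table
        GROUP BY region
        ORDER BY total DESC
    """

    for row in client.query(query).result():
        print(row["region"], row["total"])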

Another valuable tool, Looker, makes visualizing your data easy. It enables departments to share a single source of truth, which breaks down data silos and enables collaboration between departments with dashboards and views for data science and business analysis.

Hybrid Cloud Solutions

Google Cloud offers several services for multi-cloud capabilities, but let’s focus on Anthos here. Anthos provides a way to run Kubernetes clusters on Google Cloud, AWS, Azure, on-premises, or even on the edge while maintaining a single pane of glass for deploying and managing your containerized applications.

With Anthos, you can deploy applications virtually anywhere and serve your users from the cloud datacenter nearest them, across all providers, or run apps at the edge – like at local franchise restaurants or oil drilling rigs – all with the familiar interfaces and APIs your development and operations teams know and love from Kubernetes.

BigQuery Omni, currently in preview, will soon be released to the public. BigQuery Omni lets you extend the capabilities of BigQuery to the other major cloud providers. Behind the scenes, BigQuery Omni runs on top of Anthos, and Google takes care of scaling and running the clusters, so you only have to worry about writing queries and analyzing data, regardless of where your data lives. For some enterprises that have already adopted BigQuery, this can mean a ton of savings in data transfer charges between clouds, as your queries run where your data lives.

Google Cloud offers unmatched open-source technology and enterprise solutions you can leverage to gain competitive advantages. 2nd Watch has helped organizations overcome business challenges and meet objectives with similar technology, implementations, and strategies on all major cloud providers, and we would be happy to assist you in getting to the next level on Google Cloud.

2nd Watch is here to serve as your trusted cloud data and analytics advisor. When you’re ready to take the next step with your data, contact us.

Learn more

Webinar: 6 Essential Tactics for your Data & Analytics Strategy

Webinar: Building an ML foundation for Google BigQuery ML & Looker

-Aleksander Hansson, 2nd Watch Google Cloud Specialist


How to Federate Amazon Redshift Access with Azure Active Directory

Single sign-on (SSO) is a tool that solves fundamental problems, especially in midsize and large organizations with lots of users.

End users do not want to have to remember too many username and password combinations. IT administrators do not want to have to create and manage too many different login credentials across enterprise systems. It is a far more manageable and secure approach to federate access and authentication through a single identity provider (IdP).

As today’s enterprises rely on a wide range of cloud services and legacy systems, they have increasingly adopted SSO via an IdP as a best practice for IT management. All access and authentication essentially flows through the IdP wherever it is supported. Employees do not have to remember multiple usernames and passwords to access the tools they need to do their jobs. Just as importantly, IT teams prevent an administrative headache. They manage a single identity per user, which makes tasks like removing access when a person leaves the organization much simpler and less prone to error.

The same practice extends to AWS. As we see more customers migrate to the cloud platform, we hear a growing need for the ability to federate access to Amazon Redshift when they use it for their data warehouse needs.

Database administration used to be a more complex effort. Administrators had to figure out which groups a user belonged to, which objects a user or group were authorized to use, and other needs—in manual fashion. These user and group lists—and their permissions—were traditionally managed within the database itself, and there was often a lot of drift between the database and the company directory.

Amazon Redshift administrators face similar challenges if they opt to manage everything within Redshift itself. There is a better way, though. They can use an enterprise IdP to federate Redshift access, managing users and groups within the IdP and passing the credentials to Amazon Redshift at login.
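
Mechanically, the login then relies on short-lived, IdP-derived database credentials rather than passwords stored in Redshift. As a simplified sketch of that final step, once federation has produced AWS credentials, boto3 can request temporary database credentials for the federated user (all identifiers here are hypothetical):

    import boto3

    redshift = boto3.client("redshift")

    # Exchange AWS credentials (obtained via the IdP) for short-lived
    # database credentials; identifiers are hypothetical.
    creds = redshift.get_cluster_credentials(
        DbUser="jane.doe",
        DbName="analytics",
        ClusterIdentifier="example-cluster",
        DbGroups=["bi_readers"],  # group membership passed through from the IdP
        AutoCreate=True,          # create the database user on first login
    )

    print(creds["DbUser"], creds["Expiration"])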

We increasingly hear from our clients, “We use Azure Active Directory (AAD) for identity management—can we essentially bring it with us as our IdP to Amazon Redshift?”

They want to use AAD with Redshift the way they use it elsewhere, to manage their users and groups in a single place to reduce administrative complexity. With Redshift, specifically, they also want to be able to continue managing permissions for those groups in the data warehouse itself. The good news is you can do this and it can be very beneficial.

Without a solution like this, you would approach database administration in one of two alternative ways:

  1. You would provision and manage users using AWS Identity and Access Management (IAM). This means, however, you will have another identity provider to maintain—credentials, tokens, and the like—separate from an existing IdP like AAD.
  2. You would do all of this within Redshift itself, creating users (and their credentials) and groups and doing database-level management. But this creates similar challenges to legacy database management, and when you have thousands of users, it simply does not scale.

Our technical white paper outlines how to federate access to Amazon Redshift using Azure Active Directory as your IdP, passing user and group information through to the database at login.

Download the technical white paper

-Rob Whelan, Data & Analytics Practice Director


Cloud Crunch Podcast: You’re on the Cloud. Now What? 5 Strategies to Maximize Your Cloud’s Value

You migrated your applications to the cloud for a reason. Now that you’re there, what’s next? How do you take advantage of your applications and data that reside in the cloud? What should you be thinking about in terms of security and compliance? In this first episode of a 5-part series, we discuss 5 strategies you should consider to maximize the value of being on the cloud. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.


How to Federate Amazon Redshift Access with Okta

Single sign-on (SSO) is a tool that solves fundamental problems, especially in midsize and large organizations with lots of users.

End users do not want to have to remember too many username and password combinations. IT administrators do not want to have to create and manage too many different login credentials across enterprise systems. It is a far more manageable and secure approach to federate access and authentication through a single identity provider (IdP).

As today’s enterprises rely on a wide range of cloud services and legacy systems, they have increasingly adopted SSO via an IdP as a best practice for IT management. All access and authentication essentially flows through the IdP wherever it is supported. Employees do not have to remember multiple usernames and passwords to access the tools they need to do their jobs. Just as importantly, IT teams prevent an administrative headache: They manage a single identity per user, which makes tasks like removing access when a person leaves the organization much simpler and less prone to error.

The same practice extends to AWS. As we see more customers migrate to the cloud platform, we hear a growing need for the ability to federate access to Amazon Redshift when they use it for their data warehouse needs.

Database administration used to be a more complex effort. Administrators had to figure out which groups a user belonged to, which objects a user or group were authorized to use, and other needs—in manual fashion. These user and group lists—and their permissions—were traditionally managed within the database itself, and there was often a lot of drift between the database and the company directory.

Amazon Redshift administrators face similar challenges if they opt to manage everything within Redshift itself. There is a better way, though. They can use an enterprise IdP to federate Redshift access, managing users and groups within the IdP and passing the credentials to Amazon Redshift at login.
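
Under the hood, the federation handshake exchanges the IdP’s SAML assertion for temporary AWS credentials before Redshift ever sees a login. A simplified sketch of that exchange with boto3; the role and provider ARNs are hypothetical, and obtaining the SAML assertion from the sign-in response is IdP-specific:

    import boto3

    sts = boto3.client("sts")

    def credentials_from_saml(saml_assertion: str) -> dict:
        """Trade a base64-encoded SAML assertion from the IdP for
        temporary AWS credentials scoped to a federated role."""
        response = sts.assume_role_with_saml(
            RoleArn="arn:aws:iam::123456789012:role/redshift-federated",  # hypothetical
            PrincipalArn="arn:aws:iam::123456789012:saml-provider/okta",  # hypothetical
            SAMLAssertion=saml_assertion,
        )
        # Temporary AccessKeyId / SecretAccessKey / SessionToken.
        return response["Credentials"]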

We increasingly hear from our clients, “We use Okta for identity management—can we essentially bring it with us as our IdP to Amazon Redshift?” They want to use Okta with Redshift the way they use it elsewhere, to manage their users and groups in a single place to reduce administrative complexity. With Redshift, specifically, they also want to be able to continue managing permissions for those groups in the data warehouse itself. The good news is you can do this and it can be very beneficial.

Without a solution like this, you would approach database administration in one of two alternative ways:

  1. You would provision and manage users using AWS Identity and Access Management (IAM). This means, however, you will have another identity provider to maintain—credentials, tokens, and the like—separate from an existing IdP like Okta.
  2. You would do all of this within Redshift itself, creating users (and their credentials) and groups and doing database-level management. But this creates similar challenges to legacy database management, and when you have thousands of users, it simply does not scale.

Our technical white paper covers how to federate access to Amazon Redshift using Okta as your IdP, passing user and group information through to the database at login. We outline the step-by-step process we follow when we implement this solution for 2nd Watch clients, including the modifications we found were necessary to ensure everything worked properly. We explain how to set up a trial account at Okta.com, build users and groups within the organization’s directory, and enable SSO into Amazon Redshift.

Download the technical white paper

-Rob Whelan, Data & Analytics Practice Director


Are You Ready to Migrate Your Data to the Cloud? Answer These 4 Questions to Find Out

Many companies are already storing their data in the cloud, and even more are considering making the migration. The cloud offers unique benefits for data access and consolidation, but some businesses choose to keep their data on-prem for various reasons. Data migration isn’t a one-size-fits-all formula, so when developing your data strategy, think about your long-term needs and goals for optimal results.

We recommend evaluating these 4 questions before making the decision to migrate your data to the cloud:

1. Why do you want to move your data?

Typically, there are two reasons businesses find themselves in a position of wanting to change their IT infrastructure: either your legacy platform is reaching end of life (EOL) and you’re forced to make a change, or it’s time to modernize. If you’re faced with the latter – your business data has outgrown the capabilities of your current platform – it’s a good indication that migrating to the cloud is right for you. The benefits of cloud-based storage can drastically improve your business agility.

2. What is important to you?

You need to know why you’re choosing the platform you are deploying and how it’s going to support your business goals better than other options. Three central arguments for cloud storage – that are industry and business agnostic – include:

  • Agility: If you need to move quickly (and what business doesn’t?), the cloud is for you. It’s easy to start, and you can spin up a cloud environment and have a solution deployed within minutes or hours. There’s no capital expense, no server deployment, and no need for an IT implementation team.
  • Pay as you go: If you like starting small, testing things before you go all in, and only paying for what you use, the cloud is for you. It’s a very attractive feature for businesses hesitant to move all their data at once. You get the freedom and flexibility to try it out, with minimal financial risk. If it’s not a good fit for your business, you’ve learned some things, and can use the experience going forward. But chances are, the benefits you’ll find once utilizing cloud features will more than prove their value.
  • Innovation: If you want to ride the technology wave, the cloud is for you. Companies release new software and features to improve the cloud every day, and there are no long release cycles. Modernized technologies and applications are available as soon as they’re released to advance your business capabilities based on your data.

3. What is your baseline?

The more you can plan for potential challenges in advance, the better. As you consider data migration to the cloud, think about what your data looks like today. If you have an on-prem solution, like a data warehouse, lift and shift is an attractive migration plan because it’s fairly easy.

Many businesses have a collection of application databases and haven’t yet consolidated their data. They need to pull the data out, stage it, and store it without interfering with the applications. The main cloud providers offer different but similar options to get your data into a place where it can be used: AWS offers S3, Google Cloud has Cloud Storage, and Azure provides Blob Storage. Later, you can pull the data into a data warehousing solution like Amazon Redshift, Google BigQuery, Azure Synapse, or Snowflake.

4. How do you plan to use your data?

Always start with a business case and think strategically about how you’ll use your data. The technology should fit the business, not the other way around. Once you’ve determined that, garner the support and buy-in of sponsors and stakeholders to champion the proof of concept. Bring IT and business objectives together by defining the requirements and the success criteria. How do you know when the project is successful? How will the data prove its value in the cloud?

As you move forward with implementation, start small, establish a reasonable timeline, and take a conservative approach. Success is crucial for ongoing replication and investment. Once everyone agrees the project has met the success criteria, celebrate loudly! Demonstrate the new capabilities, and highlight overall business benefits and impact, to build and continue momentum.

Be aware of your limitations

When entering anything unknown, remember that you don’t know what you don’t know. You may have heard things about the cloud or on-prem environments anecdotally, but making the decision of when and how to migrate data is too important to do without a trusted partner. You risk missing out on big opportunities, or worse, wasting time, money, and resources without gaining any value.

2nd Watch is here to serve as your trusted cloud advisor, so when you’re ready to take the next step with your data, contact us.

Learn more about 2nd Watch Data and Analytics services

-Sam Tawfik, Sr Product Marketing Manager, Data & Analytics


Cloud Crunch Podcast: Moving to the Cloud for the Right Reasons

When you’re considering moving to the cloud, it’s important to take a personal examination of your goals for migrating, outside of the basic benefits achievable with the cloud. To maximize the value of the cloud, you have to make sure you’re moving for the right reasons. Today we discuss just that with our very own 2nd Watch CEO, Doug Schneider. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.


Migrating Data to Snowflake – An Overview

When considering migrating your data to the cloud, everyone’s familiar with the three major cloud providers – AWS, Google Cloud, and Microsoft Azure. But there are a few other players you should also take note of. Snowflake is a leading cloud data platform that offers exceptional design, scalability, simplicity, and return on investment (ROI).

What is Snowflake?

The Snowflake cloud data platform was born in the cloud for data warehousing. It’s built entirely to maximize cloud usage and designed for almost unlimited scalability. Users like the simplicity, and businesses gain significant ROI from the wide range of use cases Snowflake supports.

Out of the box, Snowflake is easy to interact with through its web interface. Without having to download any applications, users can connect with Snowflake and create additional user accounts for a fast and streamlined process. Additionally, Snowflake performs as a data platform, rather than just a data warehouse. Data ingestion is cloud native and existing tools enable effortless data migration.

Business Drivers

The decision to migrate data to a new cloud environment or data warehousing solution needs to be based on clearly defined value. Why are you making the transition? What’s your motivation? Maybe you need to scale up, or there’s some sort of division or business requirement for migration. Oftentimes, companies have a particular implementation that needs to change, or they have specific needs that aren’t being met by their current data environment.

Take one of our clients, for instance. When the client’s company was acquired, they came to utilize a data warehouse shared by all the companies the acquiring company owned. When the client was eventually sold, they needed their own implementation and strategy for migrating data into the cloud. Together, we took the opportunity to evaluate some of the newer data platform tools, like Snowflake, for their specific business case and to migrate quickly to an independent data platform.

With Snowflake, setup was minimal, and it supported our client’s need for a large number of database users. Migrating from the shared data warehouse to Snowflake was relatively easy, and it gave all users access through a simple web interface. Snowflake also provided more support for semi-structured data, which simplified querying things like JSON or nested data.

Implementation

Migrating data to Snowflake is generally a smooth transition because Snowflake accepts data from your existing platform. For instance, if data is stored in Amazon S3, Google Cloud Storage, or Azure, you can create Snowflake environments for each and then ingest the data using SQL commands and configuration. Not only can you run all the same queries with minor tweaks and get the same output, but Snowflake also fits additional needs and requirements. If you’ve worked in SQL in any manner – on an application database or in data warehousing – training is minimal.
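
As a simplified sketch of that ingestion path using the snowflake-connector-python library (the connection parameters, stage, and table names are all hypothetical):

    import snowflake.connector

    # Hypothetical connection parameters for illustration.
    conn = snowflake.connector.connect(
        account="xy12345",
        user="LOADER",
        password="...",
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    cur = conn.cursor()

    # Point an external stage at existing cloud storage, then ingest with COPY.
    cur.execute("""
        CREATE STAGE IF NOT EXISTS s3_orders
        URL = 's3://example-bucket/orders/'
        CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...')
    """)
    cur.execute("""
        COPY INTO orders
        FROM @s3_orders
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)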

Another advantage with Snowflake is its ability to scale either horizontally or vertically to pull in any amount of data. And since it is cloud native, Snowflake has embraced the movement toward ‘pay as you go’ – in fact, that’s their entire structure. You only pay for the ingestion time and when the data warehouse is running. After that, it shuts off, and so does your payment. Cost-effective implementation lets you experiment, compare, test, and iterate on the best way to migrate each piece of your data lifecycle.

Long Term Results

Snowflake has yielded successful data migrations because of its ease of use and absence of complications. Users also see performance improvements because they’re able to get their data faster than ever, and they can grow with Snowflake – bringing in new and additional data sources and tools, taking advantage of artificial intelligence and machine learning, increasing automation, and experimenting and iterating.

From a security and governance perspective, Snowflake is strong. Snowflake enforces a multi-layer security structure, including user management. You can grant access to certain groups, organize them accordingly, integrate with your active directory, and have it run with those permissions. You assign an administrator to regulate access to specific tables in specified areas. Snowflake also lets you choose your desired security level during implementation: you have the option of an enterprise level, HIPAA compliance, or a maximum security level at a higher per-second rate.
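
Here is a brief, standalone sketch of that role-based model using the same Python connector as above (all names are hypothetical): privileges are granted to roles, and roles are granted to users or groups synced from your directory.

    import snowflake.connector

    # Hypothetical administrative connection.
    conn = snowflake.connector.connect(account="xy12345", user="ADMIN", password="...")
    cur = conn.cursor()

    # Grant scoped privileges to a role, then the role to a user.
    cur.execute("CREATE ROLE IF NOT EXISTS ANALYST")
    cur.execute("GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYST")
    cur.execute("GRANT USAGE ON SCHEMA ANALYTICS.RAW TO ROLE ANALYST")
    cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.RAW TO ROLE ANALYST")
    cur.execute('GRANT ROLE ANALYST TO USER "JANE_DOE"')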

Do you want to explore data migration opportunities? Make the most of your data by partnering with trusted experts. We’re here to help you migrate, store, and utilize data to grow your business and streamline operations. If you’re ready to take the next step in your data journey, contact us.

Learn more about 2nd Watch Data and Analytics services

-Sam Tawfik, Sr Product Marketing Manager, Data & Analytics
