6 Cloud Trends from 2020 Continuing in 2021

At the end of summer 2020, 2nd Watch surveyed over 100 cloud-focused IT directors about their cloud use. Now in 2021, we’re looking back at the 2020 Enterprise Cloud Trends Report to highlight six situations to anticipate going forward. As you would expect, COVID-19 vaccination availability, loosening restrictions, and personal comfort levels continue to influence cloud growth and remain a significant factor in the acceleration of digital transformation.

1. Remote work adoption

The forced experiment of work from home, work from anywhere, and remote work in general has proven effective for many organizations. Employees are happy with the flexibility, and many businesses are enjoying increased productivity, a larger talent pool, and significant cost savings. While just under half (46%) of survey respondents said more than 50% of their employees were working remotely in summer 2020, that number is expected to grow by 14%. Rather than pulling back on remote work enablement, 60% of companies say almost 60% of their employees will work away from the office.

2. Remote work challenges

It’s anticipated that as the number of remote workers grows, so will the challenges of managing a distributed work environment. Remote access, specifically into corporate systems, was the challenge survey respondents ranked highest. Other issues include the capacity of conferencing and collaboration tools and user competence. The complexities of both managing remote workers and being a remote worker continue to evolve.

Business and IT leaders will have to continually revisit the cloud infrastructure they established in 2020 to provide flexibility, access, and business continuity through 2021 and beyond.

3. Cloud services and business collaboration

The cloud services market is maintaining its COVID-19 growth spurt, but the relationship between provider and client is shifting. Digital transformation and an increasing reliance on cloud-based services are creating a new level of engagement and a desire for collaboration from businesses. Organizations want to work alongside cloud and service providers so they can upskill along the way and prepare for the future. Businesses are using providers to build their cloud foundation and establish some pipelines – particularly around data migration – and, in effect, learning on the job. As the business gets more involved in each project and continues to build skills and evolve its DevOps culture, it can ultimately reduce its dependence on cloud partners, and the associated costs.

4. Growing cloud budgets

Surviving and thriving organizations have been positioning, and continue to position, themselves for the long haul. Just over 64% of survey respondents said their cloud budgets had either remained the same or increased, and almost 60% say their cloud budget will grow over the next 12 months.

Many are using this time to gain competitive advantage by improving mobile app development, customer experience, and operations. The expectation of a payback period has businesses focused on boosting ROI with cloud-based services. 2020 forced business leaders to readjust how they see IT within their organization: it’s no longer a cost center, but something that can propel the company forward.

5. Cloud security and data governance

As everyone moved out of the office in 2020, hackers took notice. Since then, ransomware attacks have been steadily increasing, and there are no signs of slowing down. The majority of survey respondents, 75%, agreed that cloud security and data governance are their number one concern and priority.

High-profile breaches and remote work risks are grabbing headlines, causing organizations to question their security posture and the tools necessary to prevent incidents. Proactive AI in cloud security is enabling faster response times and greater visibility into recovery and prevention. While the tools are getting better, the threats are also getting bigger.

6. Optimism

Overall, the majority of respondents are leaning into today’s circumstances and have a positive perspective on the future. While many organizations have responded by accelerating cloud use to support the current environment, the most successful are also thinking ahead. Increasing cloud budgets, fostering external cloud collaboration for skill growth, and relying on the cloud to support remote employees all showcase how the business landscape has changed. Data is more critical than ever as organizations accelerate their digital transformation – and the future looks bright.

Stay tuned for the 2021 Enterprise Cloud Trends Report for a focus on IT departments, a look at what security improvements have been made, and an update on how organizations are continuing to use and support remote environments. Until then, if you’re moving applications to the cloud, modernizing your licensed database systems, or optimizing cloud use, let 2nd Watch help! With premier services across cloud platforms, and as a trusted cloud advisor, we embrace your unique journey. Contact Us to take the next step.

-Nicole Maus, Director of Marketing


3 Reasons Businesses Are Using Google Cloud Platform for AI

Google Cloud Platform (GCP) offers a wide scope of artificial intelligence (AI) and machine learning (ML) services fit for a range of industries and use cases. With more businesses turning to AI for data-based innovation and new solutions, GCP services are proving effective. See why so many organizations are choosing Google Cloud to motivate, manage, and make change easy.

1. Experimentation and Cost Savings

Data scientists are critical to the success of AI and ML models. The more you enable, empower, and support your data scientists through the AI lifecycle, the more accurate and reliable your models will be. Key to any successful new strategy are flexibility and cost management. One way GCP reduces costs while offering enterprise flexibility is with Google’s AI Platform Notebooks.

Managed JupyterLab notebook instances give data scientists functional flexibility – including access to BigQuery and the ability to add CPUs, RAM, and GPUs to scale – along with cloud security and data access, in a streamlined experience from data to deployment. In on-prem environments, data scientists are limited by resource availability and a variety of costs related to data warehousing infrastructure, hosting, security, storage, and other expenses. JupyterLab notebooks and BigQuery, on the other hand, are pay-as-you-go and always available via AI Platform Notebooks. With cost-effective experimentation, you avoid overprovisioning, pay only for what you use when you run it, and give data scientists powerful tools to reach data solutions fast.
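To make the pay-as-you-go point concrete, here is a minimal sketch of estimating a query’s cost from a notebook before running it, using the google-cloud-bigquery client’s dry-run mode. The project, dataset, and table names are hypothetical placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses the notebook's default project and credentials

sql = """
    SELECT user_id, COUNT(*) AS sessions
    FROM `my-project.analytics.events`  -- hypothetical table
    GROUP BY user_id
"""

# A dry run validates the query and reports bytes scanned without running it,
# and therefore without incurring any cost.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(sql, job_config=job_config)

print(f"This query would scan about {job.total_bytes_processed / 1e9:.2f} GB")
```

Because on-demand BigQuery pricing is based on bytes scanned, a quick dry run like this lets data scientists experiment freely while keeping spend predictable.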

2. Access and Applications

AI and ML projects are only possible after unifying data. A common challenge to accomplishing this first step is data silos across the organization. These pockets of disjointed data across departments threaten the reliability and business outcomes of data-based decision making. GCP is built on a foundation of integration and collaboration, giving teams the necessary tools and expansive services to gain new data insights for greater impact.

For instance, GCP enables more than just data scientists to take advantage of its AI services, databases, and tools. Developers without data science experience can use APIs to incorporate ML into a solution without ever needing to build a model. Others without data science knowledge can create custom models that integrate into applications and websites using Cloud AutoML.

Additionally, BigQuery Omni, a new service from GCP, enables compatibility across platforms. BigQuery Omni lets you query data residing in other clouds using standard SQL and the powerful BigQuery engine. This innovation furthers your ability to join data quickly, without additional expertise, for unobstructed applicability.

3. ML Training and Labs

Google enables users with best practices for cost-efficiency and performance. Through its Qwiklabs platform, you get free, temporary access to GCP and AWS, so you can learn the cloud on the real thing rather than on simulations. Google also offers training courses ranging from 30-minute individual sessions to multi-day sessions. The courses are built for everyone from introductory users to expert level, and they are instructor-led or self-paced. Thousands of topics are covered, including AI and ML, security, infrastructure, app dev, and many more.

With educational resources at their fingertips, data teams can roll up their sleeves, dive into sample data sets and labs, and experience the potential of GCP hands-on. Being able to experiment in labs without running up a bill – because it all happens in a sandbox environment – makes the actual implementation, training, and verification process faster, easier, and more cost-effective. There is no danger of accidentally leaving a BigQuery job up and running, executing over and over, at huge cost to the business.

Next Steps

If you’re contemplating AI and ML on Google Cloud Platform, get started with Qwiklabs to see what’s possible. Whether you’re the one cheerleading AI and ML in your organization or the one everyone is seeking buy-in from, Qwiklabs can help. See what’s possible on the platform before going full force on a strategy. Google is constantly adding new services and tools, so partner with experts you can trust to achieve the business transformation you’re expecting.

Contact 2nd Watch, a Google Cloud Partner with over 10 years of cloud experience, to discuss your use cases, level of complexity, and our advanced suite of capabilities with a cloud advisor.

Learn more

Webinar: 6 Essential Tactics for your Data & Analytics Strategy

Webinar: Building an ML foundation for Google BigQuery ML & Looker

-Sam Tawfik, Sr Product Marketing Manager


3 Types of Employees That Can Use AI Offerings on Google Cloud

The Google Cloud Platform (GCP) comes with a number of services, databases, and tools to operationalize company-wide data management and analytics. With the insights and accessibility provided, you can leverage data into artificial intelligence (AI) and machine learning (ML) projects cost-efficiently. GCP empowers employees to apply their ideas and experience into data-based solutions and innovation for business growth. Here’s how.

1. Developers without data science experience

With GCP, developers can connect their software engineering experience with AI capabilities to produce powerful results. Using product APIs, developers can incorporate ML into the product without ever having to build a model.

Let’s take training videos as an example. Your company has thousands of training videos varying in length and subject. They include everything from full-day trainings on BigQuery to minutes-long security trainings. How do you operationalize all that information so employees can quickly find exactly what they want?

Using Google’s Cloud Video Intelligence API, a developer can transcribe not only every single video, word for word, but also document the start and end time of every word in every video. The developer builds a search index on top of the API, and just like that, users can search for specific content across thousands of videos. Results display both the relevant videos and the timestamps within those videos where the keyword is found. Now employees can immediately find the topic they want to learn more about, without sifting through what could be hours of unrelated information.
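As a minimal sketch of the transcription step – assuming a video already uploaded to Cloud Storage at a hypothetical URI, and the google-cloud-videointelligence client library – the API returns word-level timestamps like this:

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

operation = client.annotate_video(
    request={
        "input_uri": "gs://my-training-bucket/bigquery-101.mp4",  # hypothetical
        "features": [videointelligence.Feature.SPEECH_TRANSCRIPTION],
        "video_context": videointelligence.VideoContext(
            speech_transcription_config=videointelligence.SpeechTranscriptionConfig(
                language_code="en-US"
            )
        ),
    }
)
result = operation.result(timeout=600)  # transcription is a long-running job

# Every word arrives with start/end offsets, ready to feed a search index.
for transcription in result.annotation_results[0].speech_transcriptions:
    for word in transcription.alternatives[0].words:
        print(word.word, word.start_time.total_seconds(), word.end_time.total_seconds())
```

Index those (word, timestamp, video) tuples in a search engine, and keyword search across the entire video library falls out naturally.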

Additional APIs include Cloud Natural Language, Speech-to-Text, Text-to-Speech, Cloud Data Loss Prevention, and many others in ML.

2. Everyone without data science experience, who isn’t a developer

Cloud AutoML enables your less technical employees to harness the power of machine learning. It bridges the gap between calling a prebuilt API and building your own ML model. Using AutoML, anyone can create custom models tailored to your business needs and then integrate those models into applications and websites.

For this example, let’s say you’re a global organization that needs to translate communications across dialects and business domains. The intricacies and complexities of natural language normally require expensive linguists and specialist translators with domain-specific expertise. How do you communicate in real time effectively, respectfully, and cost-efficiently?

With AutoML Translation, almost anyone can create translation models that return query results specific to your domain, in 50 different language pairs. It ingests your data graphically from any type of sheet or CSV file. The necessary input is pairs of sentences that mean the same thing in both the language you want to translate from and the one you want to translate to. Google goes the extra mile between generic translation and specific, niche vocabularies with an added layer of specificity that helps the model get the right translation for domain-specific material. Within an hour, the model translates based on your domain, your taxonomy, and the data you provided.
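Once such a model is trained, calling it takes only a few lines of client code. Here is a hedged sketch using the google-cloud-automl library; the project, region, and model ID are hypothetical placeholders:

```python
from google.cloud import automl

client = automl.PredictionServiceClient()

# Hypothetical identifiers for a trained AutoML Translation model.
model_name = client.model_path("my-project", "us-central1", "TRL1234567890")

payload = automl.ExamplePayload(
    text_snippet=automl.TextSnippet(content="The turbine housing is cracked.")
)
response = client.predict(name=model_name, payload=payload)

# The domain-tuned translation comes back in the first payload entry.
print(response.payload[0].translation.translated_content.content)
```

The caller never touches the model internals – the domain expertise lives entirely in the training pairs you supplied.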

Cloud AutoML capabilities are available for sight (Vision and Video Intelligence), structured data (Tables), and additional language offerings (Natural Language and Translation).

3. Data scientists

Data scientists have the experience and data knowledge to take full advantage of GCP’s AI tools for ML. One issue data scientists often confront is notebook functionality and accessibility. Whether it’s TensorFlow, PyTorch, or JupyterLab, these open-source ML platforms require too many resources to run on a local computer or to connect easily to BigQuery.

Google AI Platform Notebooks is a managed service that provides a pre-configured environment supporting these popular data science libraries. From a security standpoint, AI Platform Notebooks is attractive to enterprises for the added security of the cloud: relying on a local device, you run the risk of human error, theft, and catastrophic loss. Equipped with a hosted, integrated, secure, and protected JupyterLab environment, data scientists can do the following:

  • Virtualize in the cloud
  • Connect to GCP tools and services, including BigQuery
  • Develop new models
  • Access existing models
  • Customize instances
  • Use Git / GitHub
  • Add CPUs, RAM, and GPUs to scale
  • Deploy models into production
  • Backup machines

With a seamless experience from data to a deployed ML model, data scientists are empowered to work faster, smarter, and safer. Contact Us to further your organization’s ability to maximize data, AI, and ML.

Learn more

Webinar: 6 Essential Tactics for your Data & Analytics Strategy

Webinar: Building an ML foundation for Google BigQuery ML & Looker

-Sam Tawfik, Sr Product Marketing Manager


Maximizing Cloud Data with Google Cloud Platform Services

If you’re trying to run your business smarter, not harder, utilizing data to gain insights into decision making gives you a competitive advantage. Cloud data offerings empower utilization of data in the cloud, and the Google Cloud Platform (GCP) is full of options. Whether you’re migrating data, upgrading to enterprise-class databases, or transforming customer experience on cloud-native databases – Google Cloud services can fit your needs.

Highlighting some of what Google has to offer

With so many data offerings from GCP, it’s nearly impossible to summarize them all. Some are open-source projects distributed by other vendors, while others were created organically by Google to serve its own needs before being externalized to customers. A few of the most popular and widely used include the following:

  • BigQuery: Core to GCP, this serverless, scalable, multi-cloud data warehouse enables business agility – including data manipulation and transformation – and is the engine for AI, machine learning (ML), and forecasting.
  • Cloud SQL: A traditional relational database in the cloud that reduces maintenance costs with fully managed services for MySQL, PostgreSQL, and SQL Server.
  • Spanner: Another fully managed relational database, offering nearly unlimited scale, strong consistency, and almost 100% availability – ideal for supply chain and inventory management across regions.
  • Bigtable: A low-latency, NoSQL, fully managed database for ML and forecasting workloads that use very large amounts of data, both analytical and operational.
  • Data Fusion: A fully managed, cloud-native data integration tool for moving different data sources to different targets – it includes over 150 preconfigured connectors and transformers.
  • Firestore: From the Firebase world comes the next generation of Datastore. This cloud-native, NoSQL document database lets you develop custom apps that connect directly to the database in real time.
  • Cloud Storage: Object-based storage that can be considered a database because of everything you can do with it through BigQuery – including querying objects in storage with standard SQL (see the sketch below).
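To make that last point concrete, here is a minimal sketch of querying CSV files in a Cloud Storage bucket as an ad hoc external table with the google-cloud-bigquery client; the bucket path and column names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Describe the files in Cloud Storage as an external (federated) table.
external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["gs://my-sales-bucket/exports/*.csv"]  # hypothetical
external_config.autodetect = True  # infer the schema from the files themselves

# Register the definition under a temporary name and query it with standard SQL.
job_config = bigquery.QueryJobConfig(table_definitions={"sales": external_config})
sql = "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"

for row in client.query(sql, job_config=job_config).result():
    print(row.region, row.total)
```

The data never leaves the bucket; BigQuery simply treats it as one more table.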

Why BigQuery?

After more than 10 years of development, BigQuery has become a foundational data management tool for thousands of businesses. With a large ecosystem of integration partners and a powerful engine that shards queries across petabytes of data and delivers a response in seconds, there are many reasons BigQuery has stood the test of time. It’s more than just super speed, data availability, and insights.

Standard SQL language
If you know SQL, you know BigQuery. As a fully managed platform, it’s easy to learn and use. Simply populate the data and that’s it! You can also bring in large public datasets to experiment with and learn from within the platform.
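For example, here is a minimal sketch of exploring one of Google’s public datasets – bigquery-public-data.usa_names is a real, publicly hosted dataset – with nothing but standard SQL and the Python client:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Query a Google-hosted public dataset with ordinary SQL.
sql = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    WHERE state = 'TX'
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

for row in client.query(sql).result():
    print(row.name, row.total)
```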

Front-end data
If you don’t have Looker, Tableau, or another business intelligence (BI) tool to visualize dashboards on top of BigQuery, you can use the software development kit (SDK) for web-based front-end data display. For example, government health agencies can show the public real-time COVID-19 case numbers as they’re being reported. The BigQuery ecosystem is so broad that it can serve as the source of truth for your reports, dashboards, and external data representations.

Analogous across offerings

Coming from on-prem, you may be pulling data into multiple platforms – BigQuery being one of them. GCP offerings have a similar interface and easy navigation, so functionality, user experience, and even endpoint verbs are the same. Easily manage different types of data based on the platforms and tools that deliver the most value.

BigQuery Omni

One of the latest GCP services was built with an API and platform console similar to various other platforms. That compatibility enables you to query data living in other places using standard SQL. With BigQuery Omni, you can connect and combine data from outside GCP without having to learn a new language.

Ready for the next step in your cloud journey?

As a Google Cloud Partner, 2nd Watch is here to be your trusted cloud advisor throughout your cloud data journey, empowering you to fuel business growth while reducing cloud complexity. Whether you’re embracing cloud data for the first time or finding new opportunities and solutions with AI, ML, and data science, our team of data scientists can help. Contact Us for a targeted consultation and explore our full suite of advanced capabilities.

Learn more

Webinar: 6 Essential Tactics for your Data & Analytics Strategy

Webinar: Building an ML foundation for Google BigQuery ML & Looker

-Sam Tawfik, Sr Product Marketing Manager


App Modernization in the Cloud

The cloud market is maturing, and organizations worldwide are well into implementing their cloud strategies. In fact, a recent McKinsey survey estimates that, by 2022, 75% of all workloads will be running in either public or private clouds. Additionally, according to VMware, 72% of businesses are looking for a path forward for their existing applications, and it is important to consider an app modernization strategy as part of these migration efforts. Whether the driver is a desire to containerize, utilize cloud-native services, increase agility, or realize cost savings, the overall goal should be to deliver business value faster in a rapidly changing cloud environment.

Modern Application

Application modernization focuses on legacy or “incumbent” line-of-business applications, and approaches range anywhere from re-hosting from the datacenter to the cloud, to full cloud-native application rewrites. We prefer a pragmatic approach: address the issues with legacy applications that hinder organizations from realizing the benefits of modern software and cloud-native approaches, while retaining as much as possible of the intellectual property built into incumbent applications over the years. Additionally, we find ways of augmenting existing code bases to make use of modern paradigms.

Application Modernization Strategies

When approaching legacy software architecture, people often discuss breaking monolithic applications apart into microservices. However, the most important architectural decisions should center on how to best allow the application to function well in the cloud, with scalability, fault tolerance, and observability all being important aspects. A popular approach is to consider the tenets of the 12-Factor App to help guide these decisions.

Architecture discussions go hand in hand with platform considerations. Containerization and serverless functions are popular approaches, but equally valid are traditional VM clustering or even self-hosting. Additionally, we start to think about using cloud services to offload some application complexity, such as AWS S3 for document storage or AWS KMS for key management. This leads us to consider the cloud providers themselves for the best fit for the organization and its applications overall, whether that is AWS, Azure, GCP (Google Cloud Platform), or a hybrid-cloud solution.

Another very important aspect of application modernization, especially in the cloud, is ensuring that applications have proper automation. Strong continuous integration and continuous deployment (CI/CD) pipelines should be implemented or enhanced for legacy applications. Additionally, we apply CI/CD automation to deploying database migrations and performing infrastructure-as-code (IaC) updates, and we ensure paradigms like immutable infrastructure (i.e., pre-packaging machine images or utilizing containerization) are used.

Last, there is an important cultural aspect to modernization, from the organizational level down to the team level. Organizations must consider modernization part of their overall cloud strategy and support their development teams in this area. Development teams must adapt to new paradigms to understand and best utilize the cloud – adopting strong DevOps practices and reorganizing teams along business objectives instead of technology objectives is key.

By implementing a solid modernization strategy, businesses can realize the benefits the cloud provides, deliver value to their customers more rapidly, and compete in a rapidly changing cloud environment. If you’re ready to implement a modernization strategy in your organization, contact us for guidance on how to get started. Learn more about application modernization here.

-James Connell, Sr Cloud Consultant


What is App Modernization?

Application modernization is the process of migrating an incumbent or legacy software application to modern development patterns, paradigms, and platforms with the explicit purpose of improving business value. It is part of your broader cloud strategy and implies improving the software architecture, application infrastructure, development techniques, and business strategy using a cloud-native approach. Essentially, it allows you to derive increased business value from your existing application code.

Modernizing software architecture is often described as splitting up a monolithic codebase, but it can imply any improvement to the software itself, such as decoupling components or addressing tech debt in the codebase. Other examples might be finding new design patterns that allow for scale, addressing resiliency within an application, or improving observability through logs and tracing.

We often think of application modernization in the context of the cloud, and when planning a migration to the cloud or modernizing an application already in the cloud, we look at which services and platforms benefit the effort. Using a service such as Amazon S3 for serving documents instead of a network share, or using Elasticsearch instead of the database for search, are examples of infrastructure improvements (see the sketch below). Containerization and serverless platforms are also considered.
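As a small illustration of that first example, here is a hedged sketch using boto3 to serve a document from Amazon S3 through a short-lived presigned URL instead of a network share; the bucket and key are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical object: a document that used to live on a network share.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-app-documents", "Key": "invoices/2021/inv-1042.pdf"},
    ExpiresIn=900,  # the link expires after 15 minutes
)

print(url)  # hand this short-lived link to the client instead of a file path
```

The application sheds file-server maintenance, and access control moves to IAM policies and link expiry rather than share permissions.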

Development techniques also need to be addressed in the context of modernization. Developers should focus on the parts of the application that deliver value to customers and provide competitive advantage. If developers are tied up in maintenance, long manual deployments, bugs, and log investigation, they are unable to deliver value quickly.

When working with modern distributed cloud applications, teams need to follow strong DevOps practices in order to be successful. CI/CD, unit testing, diagnostics and alerting are all areas that development teams can focus on modernizing.

Legacy Application and Legacy Systems

In this context, legacy software refers to an incumbent application or system that blocks or slows an organization’s ability to accomplish its business goals. These systems still provide value and are great candidates for modernization.

Legacy can imply many things, but some common characteristics of legacy apps are:

  • Running older libraries, outdated frameworks, or development platforms and operating systems that are no longer supported.
  • Architectural issues – monolithic or tightly coupled systems that lead to difficult deployments, long release cycles, and high defect rates.
  • Large amounts of technical debt, dead or unused code, and teams who no longer understand how older parts of the application work.
  • Security issues caused by technical debt, outdated security paradigms, unpatched operating systems, and improper secret management.
  • A lack of instrumentation, with no way to observe the application.
  • Session state maintained on the client (requiring sticky sessions, etc.).
  • Manual deployment, or deployment that must happen in specific ways due to tight coupling.

Pillars of Application Modernization

When approaching a modernization project, we specifically look to ensure the following:

Flexible Architecture

The modernization initiative should follow a distributed computing approach, meaning it should take advantage of concepts such as elasticity, resiliency, and containerization. Converting applications to adhere to the principles of the “12-factor app” in order to take advantage of containerization is a prime example.

Automation

The application must be built, tested, and deployed using modern CI/CD processes. Older source control paradigms such as RCS or SVN should be replaced with a distributed version control system (Git). Infrastructure as code should be included as part of the CI/CD system.

Observability

Holistically integrate logs, metrics, and events, enabling “the power to ask new questions of your system, without having to ship new code or gather new data in order to ask those new questions” (Charity Majors, https://www.honeycomb.io/blog/observability-a-manifesto). Observability is key to understanding performance, error rates, and communication patterns, and it enables you to measure your system and establish baselines.
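As one small illustration – plain Python, with hypothetical event and field names – emitting structured, context-rich events instead of bare log strings is what makes it possible to ask those new questions later:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

def handle_checkout(order_id: str, user_id: str, cart_total: float) -> None:
    start = time.monotonic()
    # ... business logic would run here ...
    # One wide, structured event per unit of work: every field becomes a
    # dimension you can filter and aggregate on later, without new code.
    log.info(json.dumps({
        "event": "checkout.completed",  # hypothetical event name
        "order_id": order_id,
        "user_id": user_id,
        "cart_total": cart_total,
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
    }))

handle_checkout("ord-7", "u-42", 129.95)
```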

Culture

Application teams should be aligned along business function, not technology – meaning multi-disciplinary teams that can handle operations (DevOps), database work, testing (QA), and development. A culture of ownership is important in a cloud-native application.

App Modernization Examples

Application Modernization is not:

  • Just containerization – To take full advantage of containerization, applications must be properly architected (12-factor), instrumented for observability, and deployed using CI/CD.
  • Just adopting the latest framework or technology – The technology might be “modern” in a sense but doesn’t necessarily address cultural or legacy architectural issues.
  • Just addressing TCO – Addressing cost savings without addressing legacy issues does not constitute modernization.
  • Just running a workload in the cloud.
  • Just changing database platforms – Licensing issues or the desire to move to open-source clustered cloud databases does not equate to modernization.
  • Limited to a specific programming language or cloud provider – a hybrid-cloud approach can be deployed.

Application modernization includes, among others, combinations of:

  • Moving a SaaS application from a single to multi-tenant environment.
  • Breaking up a monolithic application into microservices.
  • Applying event driven architecture to decouple and separate concerns.
  • Utilizing cloud services such as S3 to replace in-house solutions.
  • Refactoring to use NoSQL technologies such as MongoDB, Elasticsearch, or Redis.
  • Containerization and utilization of PaaS technologies such as Kubernetes or Nomad.
  • Utilization of Serverless (FaaS) technologies such as AWS Lambda, Azure Functions, OpenFaas, or Kubeless.
  • Creating strong API abstractions like REST or gRPC and utilizing API Gateways.
  • Transitioning to client-side rendering frameworks (React, Vue.js, etc.) and serverless edge deployment of UI assets, removing the webserver.
  • Moving long running synchronous tasks to asynchronous batch processes.
  • Utilizing saga patterns or business process workflows.
Ultimately, all of these combinations focus on enhancing business applications, improving customer experience, and enabling rapid digital transformation for the organization.

If you’re ready to start considering application modernization in your organization, contact us for guidance on how to get started.

-James Connell, Sr Cloud Consultant


Cloud Crunch Podcast: 5 Strategies to Maximize Your Cloud’s Value – Create Competitive Advantage from your Data

AWS data expert Saunak Chandra joins today’s episode to break down the first of five strategies for maximizing your cloud’s value – creating competitive advantage from your data. We look at tactics including Amazon Redshift, the RA3 node type, best practices for performance, data warehouses, and varying data structures. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.


Google Cloud, Open-Source and Enterprise Solutions

In 2020, a year when enterprises had to rethink their business models to stay alive, Google Cloud grew 47% and captured market share. If you are not already looking at Google Cloud as part of your cloud strategy, you probably should be.

Google has made conscious choices about not locking in customers with proprietary technology. Open-source technology has, for many years, been a core focus for Google, and many of Google Cloud’s solutions can integrate easily with other cloud providers.

Kubernetes (GKE), Knative (Cloud Functions), TensorFlow (Machine Learning), and Apache Beam (Data Pipelines) are some examples of cloud-agnostic tools that Google has open-sourced and which can be deployed to other clouds as well as on-premises, if you ever have a reason to do so.

Specifically, some of Google Cloud’s services and its go-to-market strategy set it apart. Modern and scalable solutions like BigQuery, Looker, and Anthos fall into this category. They are best-in-class tools for their respective use cases, and if you are serious about your digital transformation efforts, you should evaluate their capabilities and understand what they can do for your business.

Three critical challenges we see repeatedly from our enterprise clients here at 2nd Watch include:

  1. How to get started with public cloud
  2. How to better leverage their data
  3. How to take advantage of multiple clouds

Let’s dive into each of these.

Foundation

Ask any architect if they would build a house without a foundation, and they would undisputedly tell you “No.” Unfortunately, many companies new to the cloud do precisely that. The most crucial step in preparing an enterprise to adopt a new cloud platform is to set up the foundation.

Future standards are dictated in the foundation, so building it incorrectly will cause unnecessary pain for your valuable engineering resources. A proper foundation – one that includes a project structure aligned with your project lifecycle and environments, and a CI/CD pipeline to push infrastructure changes through code – will enable your teams to become more agile while managing infrastructure in a modern way.

A foundation’s essential building blocks include project structure, network segmentation, security, IAM, and logging. Google has a multi-cloud tool called Cloud Operations for log management, reporting, and alerting; alternatively, you can ingest logs into your existing tools or set up the brand of firewalls you’re most familiar and comfortable with from the Google Cloud Marketplace. Depending on your existing tools and industry regulations, compliance best practices might vary slightly, guiding you in one direction or another.

DataOps

Google has been an analytics powerhouse since its inception. The amount of data moving through Google’s global fiber network at any given time is incredible. Why does this matter to you? Google has now made some of the internal tools it uses to manage large amounts of data available to you, enabling you to better leverage your own data. BigQuery is one of these tools.

Being serverless, BigQuery lets you get started on a budget, and it can scale to petabytes of data without breaking a sweat. If you have managed data warehouses, you know that scaling them while keeping them performant is not an easy task. With BigQuery, it is.

Another valuable tool, Looker, makes visualizing your data easy. It enables departments to share a single source of truth, which breaks down data silos and fosters collaboration through dashboards and views for data science and business analysis.

Hybrid Cloud Solutions

Google Cloud offers several services for multi-cloud capabilities, but let’s focus on Anthos here. Anthos provides a way to run Kubernetes clusters on Google Cloud, AWS, Azure, on-premises, or even on the edge while maintaining a single pane of glass for deploying and managing your containerized applications.

With Anthos, you can deploy applications virtually anywhere and serve your users from the cloud datacenter nearest them, across all providers, or run apps at the edge – like at local franchise restaurants or oil drilling rigs – all with the familiar interfaces and APIs your development and operations teams know and love from Kubernetes.

Currently in preview, BigQuery Omni will soon be released to the public. BigQuery Omni lets you extend the capabilities of BigQuery to the other major cloud providers. Behind the scenes, BigQuery Omni runs on top of Anthos, and Google takes care of scaling and running the clusters, so you only have to worry about writing queries and analyzing data, regardless of where your data lives. For enterprises that have already adopted BigQuery, this can mean significant savings in data transfer charges between clouds, as your queries run where your data lives.

Google Cloud offers some unmatched open-source technology and enterprise solutions you can leverage to gain competitive advantages. 2nd Watch has helped organizations overcome business challenges and meet objectives with similar technology, implementations, and strategies on all major cloud providers, and we would be happy to assist you in getting to the next level on Google Cloud.

2nd Watch is here to serve as your trusted cloud data and analytics advisor. When you’re ready to take the next step with your data, Contact Us.

Learn more

Webinar: 6 Essential Tactics for your Data & Analytics Strategy

Webinar: Building an ML foundation for Google BigQuery ML & Looker

-Aleksander Hansson, 2nd Watch Google Cloud Specialist


How to Federate Amazon Redshift Access with Azure Active Directory

Single sign-on (SSO) is a tool that solves fundamental problems, especially in midsize and large organizations with lots of users.

End users do not want to have to remember too many username and password combinations. IT administrators do not want to have to create and manage too many different login credentials across enterprise systems. It is a far more manageable and secure approach to federate access and authentication through a single identity provider (IdP).

As today’s enterprises rely on a wide range of cloud services and legacy systems, they have increasingly adopted SSO via an IdP as a best practice for IT management. All access and authentication essentially flows through the IdP wherever it is supported. Employees do not have to remember multiple usernames and passwords to access the tools they need to do their jobs. Just as importantly, IT teams prevent an administrative headache. They manage a single identity per user, which makes tasks like removing access when a person leaves the organization much simpler and less prone to error.

The same practice extends to AWS. As we see more customers migrate to the cloud platform, we hear a growing need for the ability to federate access to Amazon Redshift when they use it for their data warehouse needs.

Database administration used to be a more complex effort. Administrators had to figure out – manually – which groups a user belonged to, which objects a user or group was authorized to use, and so on. These user and group lists, and their permissions, were traditionally managed within the database itself, and there was often a lot of drift between the database and the company directory.

Amazon Redshift administrators face similar challenges if they opt to manage everything within Redshift itself. There is a better way, though. They can use an enterprise IdP to federate Redshift access, managing users and groups within the IdP and passing the credentials to Amazon Redshift at login.

We increasingly hear from our clients, “We use Azure Active Directory (AAD) for identity management—can we essentially bring it with us as our IdP to Amazon Redshift?”

They want to use AAD with Redshift the way they use it elsewhere: to manage their users and groups in a single place and reduce administrative complexity. With Redshift, specifically, they also want to be able to continue managing permissions for those groups in the data warehouse itself. The good news is that you can do this, and it can be very beneficial.
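To make the login flow concrete, here is a hedged sketch using Amazon’s redshift_connector Python driver and its Azure AD credentials plugin. Every identifier below is a hypothetical placeholder, and exact parameter names may vary by driver version, so consult the driver documentation:

```python
import redshift_connector

# Authenticate to Redshift through Azure Active Directory (AAD) federation.
conn = redshift_connector.connect(
    iam=True,
    credentials_provider="AzureCredentialsProvider",
    idp_tenant="my-azure-tenant-id",      # AAD tenant (hypothetical)
    client_id="my-azure-app-client-id",   # AAD app registration (hypothetical)
    client_secret="my-azure-app-secret",
    user="jane.doe@example.com",          # AAD username
    password="********",                  # AAD password
    cluster_identifier="examplecluster",
    database="dev",
)

cursor = conn.cursor()
cursor.execute("SELECT current_user")
print(cursor.fetchone())
```

At login, the driver exchanges the AAD credentials for temporary Redshift database credentials, so users and groups stay managed in AAD while their permissions remain managed in the warehouse.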

Without a solution like this, you would approach database administration in one of two ways:

  1. You would provision and manage users using AWS Identity and Access Management (IAM). This means, however, that you would have another identity provider to maintain – credentials, tokens, and the like – separate from an existing IdP like AAD.
  2. You would do all of this within Redshift itself, creating users (and their credentials) and groups and doing database-level management. But this creates challenges similar to those of legacy database management, and when you have thousands of users, it simply does not scale.

Our technical white paper outlines how to federate access to Amazon Redshift using Azure Active Directory as your IdP, passing user and group information through to the database at login.

Download the technical white paper

-Rob Whelan, Data & Analytics Practice Director


Cloud Crunch Podcast: You’re on the Cloud. Now What? 5 Strategies to Maximize Your Cloud’s Value

You migrated your applications to the cloud for a reason. Now that you’re there, what’s next? How do you take advantage of your applications and data that reside in the cloud? What should you be thinking about in terms of security and compliance? In this first episode of a 5-part series, we discuss 5 strategies you should consider to maximize the value of being on the cloud. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.
