3 Options for Getting Started with a Modern Data Warehouse

In previous blog posts, we laid out the benefits of a modern data warehouse, explored the different types of modern data warehouses available, and discussed where a modern data warehouse fits in your overall data architecture.

Download Now: Modern Data Warehouse Comparison Guide [Snowflake, Amazon Redshift, Azure Synapse, and Google BigQuery]

There is no such thing as a one-size-fits-all data warehouse, and by the same token there is no single approach to getting started. The right starting point depends on your goals, your needs, and where you are today. In this blog post, we’ll outline a few options 2nd Watch offers for getting started with a modern data warehouse and the details of each.

  • Option 1: 60-Minute Data Architecture Assessment
  • Option 2: Modern Data Warehouse Strategy
  • Option 3: Modern Data Warehouse Quickstart

Option 1: 60-Minute Data Architecture Assessment

A 60-minute data architecture assessment is a great option to see how a modern data warehouse would fit in your current environment and what would be involved to get from where you are now to where you want to be.

During this session, we will outline a plan to achieve your goals and help you understand the tools, technologies, timeline, and cost to get there.

Who is this for? Organizations in the very early planning stages

Duration: 60 minutes

More Information

Option 2: Modern Data Warehouse Strategy

In order to see ROI and business value from your modern data warehouse, you must have a clear plan on how you are going to use it. During a modern data warehouse strategy project, our team will work with your stakeholders to understand your business goals and design a tech strategy to ensure business value and ROI from your data environment.

Who is this for? Organizations in the early planning stages looking to establish the business use case, cost benefits, and ROI of a modern data warehouse before getting started

Duration: 2-, 4-, 6-, or 8-week strategies are available

More Information

Option 3: Modern Data Warehouse Quickstart

You have your strategy laid out and are ready to get started ASAP. The modern data warehouse quickstart is a great option to get your modern data warehouse up and running in as few as six weeks.

During this quickstart, we’ll create a scalable data warehouse; clean, normalize, and ingest data; and even provide reports for predefined use cases.

Who is this for? Organizations that have outlined their strategy and are ready to start seeing the benefits of a modern data warehouse

Duration: 6 weeks

More Information

Not sure where to begin? We recommend beginning with a 60-minute data architecture assessment. This session allows us to walk through your current architecture, understand your organization’s pain points and goals for analytics, brainstorm on a future state architecture based on your goals, and then come up with next steps. Furthermore, the assessment allows us to determine if your organization needs to make a change, what those changes are, and how you might go about implementing them. Simply put, we want to understand the current state, learn about the future state of what you want to build toward, and help you create a plan so you can successfully execute on a modern data warehouse project.

A Word of Warning

Modern data warehouses are a big step forward from traditional on-premise architectures. They allow organizations to innovate quicker and provide value to the business much faster. An organization has many options in the cloud and many vendors offer a cloud data warehouse, but be careful: building a modern data warehouse architecture is highly involved and may require multiple technologies to get you to the finish line.

The most important thing to do when embarking on a modern data warehouse initiative is to have an experienced partner guide you through the process the right way: from establishing why a cloud data warehouse is important to your organization, to outlining the future state vision, to developing a plan that gets you there.

Data warehouse architecture is changing; don’t fall behind your competition. With multiple options for getting started, there is no reason to wait.

We hope you found this information valuable. If you have any questions or would like to learn more, please contact us and we’ll schedule a time to connect.


3 Data Integration Best Practices Every Successful Business Adopts

Here’s a hypothetical situation: Your leadership team is on a conference call, and the topic of conversation turns to operational reports. The head of each line of business (LOB) presents a conflicting set of insights, but each one is convinced that the findings from their analytics platform are the gospel truth. With data segregated across the LOBs, there’s no clear way to determine which insights are correct or make an informed, unbiased decision.

What Do You Do?

In our experience, the best course of action is to create a single source of truth for all enterprise analytics. Organizations that do so achieve greater data consistency and higher-quality data sources, increasing the accuracy of their insights – no matter who is conducting the analysis. Since the average organization draws from 400 different data sources (and one in five needs to integrate more than 1,000 disparate data sources), it’s no surprise that many organizations struggle to integrate their data. Yet with these data integration best practices, you’ll find fewer challenges as you create a golden source of insight.

Take a Holistic Approach

The complexity of different data sources and niche analytical needs within the average organization makes it difficult for many to home in on a master plan for data integration. As a result, there are plenty of instances in which the tail ends up wagging the dog.

Maybe it’s an LOB with greater data maturity pushing for an analytics layer that aligns with their existing analytics platform to the detriment of others. Or maybe the organization is familiar with a particular stack or solution and is trying to force the resulting data warehouse to match those source schema. Whatever the reason, a non-comprehensive approach to data integration will hamstring your reporting.

In our experience, organizations see the best results when they design their reporting capabilities around their desired insight – not a specific technology. Take our collaboration with a higher education business. They knew from the outset that they wanted to use their data to convert qualified prospects into more enrollees. They trusted us with the logistics of consolidating their more than 90 disparate data sources (from a variety of business units across more than 10 managed institutions) into reports that helped them analyze the student journey and improve their enrollment rate as a whole.

With their vision in mind, we used an Alooma data pipeline to move the data to the target cloud data warehouse, where we transformed the data into a unified format. From there, we created dashboards that allowed users to obtain clear and actionable insight from queries capable of impacting the larger business. By working toward an analytical goal rather than conforming to their patchwork of source systems, we helped our client lay the groundwork to increase qualified student applications, reduce the time from inquiry to enrollment, and even increase student satisfaction.

Win Quickly with a Manageable Scope

When people hear the phrase “single source of truth” in relation to their data, they imagine their data repository needs to enter the world fully formed with an enterprise-wide scope. For mid-to-large organizations, that end-to-end data integration process can take months (if not years) before they receive any direct ROI from their actions.

One particular client of ours entered the engagement with that boil-the-ocean mentality. A previous vendor had proposed a three-year timeline, suggesting a data integration strategy that would:

  • Map their data ecosystem
  • Integrate disparate data sources into a centralized hub
  • Create dashboards for essential reporting
  • Implement advanced analytics and data science capabilities

Though we didn’t necessarily disagree with the projected capability, the waiting period before they experienced any ROI undercut the potential value. Instead, we’re planning out a quick win for their business, focusing on a mission-critical component that can provide a rapid ROI. From there, we will scale up the breadth of their target data system and the depth of their analytics.

This approach has two added benefits. One, you can test the functionality and accessibility of your data system in real time, making enhancements and adjustments before you expand to the enterprise level. Two, you can develop a strong and clear use case early in the process, lowering the difficulty bar as you try to obtain buy-in from the rest of the leadership team.

Identify Your Data Champion

The shift from dispersed data silos to a centralized data system is not a turnkey process. Your organization is undergoing a monumental change. As a result, you need a champion within the organization to foster the type of data-driven culture that ensures your single source of truth lives up to the comprehensiveness and accuracy you expect.

What does a data champion do? They act as an advocate for your new data-driven paradigm. They communicate the value of your centralized data system to different stakeholders and end users, encouraging them to transition from older systems to more efficient dashboards. Plus, they motivate users across departments and LOBs to follow data quality best practices that maintain the accuracy of insights enterprise wide.

It’s not essential that this person be a technical expert. They do need to be passionate and able to build trust with members of the team, showcasing the new possibilities enabled by your data integration solution. All of the technical elements of data integration, and of navigating your ELT/ETL tool, can be handled by a trusted partner like 2nd Watch.

Schedule a whiteboard session with our team to discuss your goals, source systems, and data integration solutions.


What Is MDM and Should You Consider an MDM Solution?

Master data management, commonly called MDM, is an increasingly hot topic. You may have heard the term thrown around and been wondering, “What is MDM?” and, “Does my business need it?” We’re sharing a crash course in master data management to cover those questions, and we may even answer some you haven’t thought of asking yet.

What is MDM?

Master data management allows for successful downstream analytics as well as synchronization of data to systems across your business. The process involves three major steps:

  1. Ingest all relevant data in a repository.
  2. Use an MDM tool (such as Riversand or Semarchy) to “goldenize” the data. In other words, create one current, complete, and accurate record.
  3. Send the goldenized data downstream for analytics and back to the original source.

Let’s say one part of your business stores customer data in Oracle and a different area has customer data in Salesforce. And maybe you’re acquiring a business that stores customer data in HubSpot in some instances. You want to be able to access all of this information, understand what you’re accessing, and then analyze the data to help you make better business decisions. This is when you would turn to master data management.

Do we need an MDM solution?

This chart is a bit of an oversimplification but serves as a starting point. If you want to better understand how master data management could impact your business and ensure your MDM solution is customized to your needs, you can work with a data and analytics consulting firm like 2nd Watch.

When should I get help implementing an MDM solution?

An MDM company will be skilled at creating the “golden record” of your data. (Reminder: This is one current, complete, and accurate record.) However, they typically lack the ability to provide guidance going forward.

A data and analytics consulting firm like 2nd Watch will take a broad view of the current state of your business and its individual entities, while also considering where you want your organization to go. We partner with leading master data management providers like Riversand to ensure you get the best possible MDM solution. We can then guide the MDM implementation and set you up for next steps like creating reports and developing a data governance strategy.

Having an advocate for your future state goals during a master data management implementation is particularly important because there are four types of MDM architecture styles: registry, consolidation, coexistence, and centralized. You don’t need to understand the nuances of these master data management styles; that’s our job. 2nd Watch’s MDM experience has helped us develop recommendations on which style works best for different needs, so we can quickly generate value while also following data best practices.

Even more important is how a master data management solution evolves over time. As your business changes, your data will change with it. 2nd Watch can develop an MDM solution that keeps up with your needs, moving through the progressively complex styles of MDM architecture as your organization grows and expands.

If you think an MDM solution might be right for your organization, contact us to learn more about how we can help.


4 Reasons Why A Technology Agnostic Partner Is Key To Choosing The Right BI Tool

With all the data tools and analytics platforms on the market, it’s easy to select a tool based on a great demo, a beautiful interface, and the promise of a single solution. Before you sign the contract, you may want to consider these four reasons to consult with a technology agnostic partner to find the best analytics solution.

To start, let’s define what I mean by tech agnostic partner: A tech agnostic partner adds value by not anchoring themselves to a single data tool or analytics platform. This allows for a more strategic, flexible, and long-term solution that is not reliant on any one specific tool.

Reason 1: A technology agnostic partner can focus on the big picture strategy and work with many partners.

For obvious reasons, BI vendors tend to sell their product as a silver bullet. Regardless of your company’s issues, the solution will usually involve their product at the center. For example, a vendor may advise you to store and manage your data within their tool without any back-end data warehouse. While this may seem like an easy solution, it can limit the ways you can access that data and integrate with other applications. Tableau, Qlik, Birst, or Looker are great BI tools but are not always the best tools to integrate your data together. In other words, you wouldn’t hire a glassmaker to build your house, but if you did, I bet it would have a lot of fancy windows.

“You wouldn’t hire a glassmaker to build your house, but if you did, I bet it would have a lot of fancy windows.”

Because our focus is delivering a full platform, 2nd Watch recommends tools based on how well they pair with the current and future state of your data. For example, if your firm has invested in an on-premise structure focused on batch processing, we might recommend an SSIS or Informatica ETL tool. However, if your goal is to perform real-time analysis in the cloud, we might point you towards a data pipeline tool such as Alooma or FiveTran. The emphasis is on matching your strategy with the tool’s competitive advantage, not the other way around. Moreover, selecting your tools based on your use case can actually save you money and add functionality.

Reason 2: A technology agnostic partner can develop solutions based on your priorities.

Along with focusing on your strategy, we also work with your priorities in terms of budget, scope, and schedule. 2nd Watch can honestly assess how much time and money is required to reach your goal and work to build solutions that match your ambitions.

A popular (and important) sales tactic is to emphasize the ROI of using a tool. The major con of this approach is that many sales teams ignore the critical steps that make that investment possible. Except in straightforward use cases, pointing a tool at a heap of data will not magically create viable results. 2nd Watch understands how these different tools function in the market, and our consultants are experts at identifying each tool’s true value proposition.

Reason 3: A technology agnostic partner can be as tech-forward or backward-compatible as needed.

There is a huge spectrum across the market for most data functions today. From cheap/open-source software to enterprise-ready options, companies have more opportunities to pivot as the technology matures. By avoiding vendor lock-in, firms can take advantage of cost-savings and functionality of competitive products without making expensive modifications.

2nd Watch’s core business is implementing a data architecture that is flexible enough to both accommodate your existing technology and prepare your firm for the future.

Reason 4: A technology agnostic partner can build a team of problem solvers, not code monkeys.

2nd Watch has worked with several tools across multiple stacks (e.g., Azure, AWS, open-source, on-premise). This has given us a much greater breadth of understanding across the different technologies, but more importantly, it has allowed us to focus on the problem we’re trying to solve rather than just implementing the tool. It is not enough to have experience with a specific tool; a team should be able to adjust to new technologies as the market matures.

For example, Snowflake entered the market in 2014. There are literally no developers with even five years of experience, and yet it is currently the best-performing data warehouse on the market. [Note: This blog post was originally published in 2018.] Who can you trust to implement? As we do with many of our technology partners, we’ve worked closely with Snowflake to get our team trained and certified in this technology. As a result, our consultants can focus on understanding the underlying function of a service and maintaining a strong technical base (rather than learning on the fly). This investment in our team is why we are a top Snowflake partner in Chicago (brag). It’s because we’re strategically tech agnostic that 2nd Watch has the flexibility to translate our expertise in data architecture into cutting edge projects for our customers.

So here is the shocking conclusion: Partner with a company that is technology agnostic. Just like hiring any service provider, find an experienced team that is incentivized purely by the success of your project and not by trying to fit the proverbial square peg into a round hole.

Looking for more data and analytics insights? Download our eBook, “Advanced Data Insights: An End-to-End Guide for Digital Analytics Transformation.”

 


2020 Predictions: Multicloud

Multicloud has risen to the fore in 2019 as customers continue to migrate to the cloud and build out a variety of cloud environments.

Multicloud offers the obvious benefits of avoiding lock-in with a single provider and of being able to try different platforms. But how far have customers actually gotten when it comes to operating multicloud environments? And what does 2020 hold for the strategy?

Adoption

As 2020 approaches and datacenter leases expire, we can expect to see continued cloud adoption with the big public cloud players – Amazon and Azure in particular. Whether a move to a multicloud environment is in the cards or whether that may be a step too far for firms that are already nervous about shifting from a hosted datacenter to the public cloud is a question cloud providers are eager to get answers to.

But there isn’t a simple answer, of course.

We have to remember that with a multicloud solution, there has to be a way to migrate or move workloads between the clouds, and one of the hurdles multicloud adoption is going to face in 2020 is organizations not yet having the knowledge base when it comes to different cloud platforms.

What we may well see is firms taking that first step and turning to VMware or Kubernetes – an open-source container orchestration platform – as a means to overlay native cloud services in order to adopt multicloud strategies. At VMworld in August, the vendor demonstrated VMs being migrated between Azure and AWS, something users can start to become familiar with in order to build their knowledge of cloud migrations and, therefore, multicloud environments.

For multicloud in 2020 this means not so much adoption, but awareness and investigation. Those organizations using an overlay like VMware to operate a multicloud environment can do so without having deep cloud expertise and sophistication in-house. This may be where multicloud takes off in 2020. Organizations wouldn’t necessarily need to know (or care) how to get between their clouds; they would have the ability to bounce between Azure, Amazon, and Google Cloud via their VMware layer instead.

Still, as we’re moving into a multicloud world and companies start to gravitate towards a multicloud model, they’re going to see that there are multiple ways to utilize it. They will want to understand it and investigate it further, which will naturally lead to questions as to how it can serve their business. And at the moment, the biggest limiter is not having this in-house knowledge to give organizations that direction. Most firms don’t yet have one single person that knows Amazon or Azure at a sophisticated enough level to comfortably answer questions about the individual platforms, let alone how they can operate together in a multicloud environment.

What this means is that customers do a lot of outsourcing when it comes to managing their cloud environment, particularly in areas like PaaS, IaaS, Salesforce and so on. As a result, organizations are starting to understand how they can use these cloud technologies for their internal company processes, and they’re asking, ‘Why can’t we use the rest of the cloud as well, not just for this?’ This will push firms to start investigating multicloud more in 2020 and beyond – because they will realize they’re already operating elements of a multicloud environment and their service providers can advise them on how to build on that.

Adoption steps

For firms thinking about adopting a multicloud environment – even those who may not feel ready yet – it’s a great idea to start exploring a minimum of two cloud providers. This will help organizations get a feel for the interface and services, which will lead to an understanding of how a multicloud environment can serve their business and which direction to go in.

It’s also a good idea to check out demos of the VMware or Kubernetes platforms to see where they might fit in.

And lastly, engage early with Amazon, Azure and VMware or a premier partner like 2nd Watch. Companies seeking a move to the cloud are potentially missing out on monies set aside for migration assistance and adoption.

What will 2020 bring?

2020 is certainly set to see multicloud questions being asked, but it’s likely that hybrid cloud will be more prevalent than multicloud. Why? Because customers are still trying to decide if they want to get into cloud rather than think about how they can utilize multiple clouds in their environment. They just aren’t there yet.

As customers still contemplate this move to the cloud, it’s much more likely that they will consider a partial move – the hybrid cloud – to begin with, as it gives them the comfort of knowing they still hold some of their data on-premise, while they get used to the idea of the public cloud. This is especially true of customers in highly regulated industries, such as finance and healthcare.

What does this mean for multicloud? A wait. The natural step forward from hybrid cloud is multicloud, but providers will need to accept that it’s going to take time and we’re simply not quite there yet, nor will we be in 2020.

But we will be on the way – well on the way – as customers take a step further along the logical path to a multicloud future. 2020 may not be the year of multicloud, but it will be the start of a pretty short journey there.

-Jason Major, Principal Cloud Consultant

-Michael Moore, Associate Cloud Consultant


The Cloudcast Podcast with Jeff Aden, Co-Founder and EVP at 2nd Watch

The Cloudcast’s Aaron and Brian talk with Jeff Aden, Co-Founder and EVP at 2nd Watch, about the evolution of 2nd Watch as a Cloud Integrator as AWS has grown and shifted its focus from startups to enterprise customers. Listen to the podcast at http://www.thecloudcast.net/2019/02/evolution-of-public-cloud-integrator.html.

Topic 1 – Welcome to the show Jeff. Tell us about your background, the founding of 2nd Watch, and how the company has evolved over the last few years.

Topic 2 – We got to know 2nd Watch at one of the first AWS re:Invent shows, as they had one of the largest booths on the floor. At the time, they were listed as one of AWS’s best partners. Today, 2nd Watch provides management tools, migration tools, and systems-integration capabilities. How does 2nd Watch think of themselves?

Topic 3 –  What are the concerns of your customers today, and how does 2nd Watch think about matching customer demands and the types of tools/services/capabilities that you provide today?

Topic 4 – We’d like to pick your brain about the usage patterns and insights you’re seeing from your customers’ use of AWS. It’s mentioned that 100% are using DynamoDB, 53% are using Elastic Kubernetes, and a fast-growing segment is using services like Athena, Glue, and SageMaker. What are some of the types of applications you’re seeing customers build that leverage these new models?

Topic 5 – With technologies like Outpost being announced, after so many years of AWS saying “Cloud or legacy Data Center,” how do you see this impacting the thought process of customers or potential customers?


Migrating to Terraform v0.10.x

When it comes to managing cloud-based resources, it’s hard to find a better tool than Hashicorp’s Terraform. Terraform is an ‘infrastructure as code’ application, marrying configuration files with backing APIs to provide a nearly seamless layer over your various cloud environments. It allows you to declaratively define your environments and their resources through a process that is structured, controlled, and collaborative.

One key advantage Terraform provides over other tools (like AWS CloudFormation) is having a rapid development and release cycle fueled by the open source community. This has some major benefits: features and bug fixes are readily available, new products from resource providers are quickly incorporated, and you’re able to submit your own changes to fulfill your own requirements.

Hashicorp recently released v0.10.0 of Terraform, introducing some fundamental changes in the application’s architecture and functionality. We’ll review the three most notable of these changes and how to incorporate them into your existing Terraform projects when migrating to Terraform v0.10.x.

  1. Terraform Providers are no longer distributed as part of the main Terraform distribution
  2. New auto-approve flag for terraform apply
  3. Existing terraform env commands replaced by terraform workspace

A brief note on Terraform versions:

Even though Terraform uses a style of semantic versioning, its ‘minor’ versions should be treated as ‘major’ versions.

1. Terraform Providers are no longer distributed as part of the main Terraform distribution

The biggest change in this version is the removal of provider code from the core Terraform application.

Terraform Providers are responsible for understanding API interactions and exposing resources for a particular platform (AWS, Azure, etc.). They know how to initialize and call their applications or CLIs, handle authentication and errors, and convert HCL into the appropriate underlying API calls.

It was a logical move to split the providers out into their own distributions. The core Terraform application can now add features and release bug fixes at a faster pace, new providers can be added without affecting the existing core application, and new features can be incorporated and released to existing providers without as much effort. Having split providers also allows you to update your provider distribution and access new resources without necessarily needing to update Terraform itself. One downside of this change is that you have to keep up to date with features, issues, and releases of more projects.

The provider repos can be accessed via the Terraform Providers organization in GitHub. For example, the AWS provider can be found here.

Custom Providers

An extremely valuable side-effect of having separate Terraform Providers is the ability to create your own, custom providers. A custom provider allows you to specify new or modified attributes for existing resources in existing providers, add new or unsupported resources in existing providers, or generate your own resources for your own platform or application.

You can find more information on creating a custom provider from the Terraform Provider Plugin documentation.
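
As a rough sketch of how a custom provider gets picked up in v0.10.x: Terraform looks for third-party plugin binaries in the user plugins directory, and terraform init then wires them in alongside the official providers. The binary name below (terraform-provider-myapp) is purely hypothetical; real plugin binaries must follow the terraform-provider-<name> convention.

# Build or download your custom provider binary (hypothetical name),
# then place it in the user plugins directory Terraform 0.10.x scans
mkdir -p ~/.terraform.d/plugins
cp ./terraform-provider-myapp ~/.terraform.d/plugins/

# Reinitialize the project so Terraform discovers the new plugin
terraform init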

1.1 Configuration

The nicest part of this change is that it doesn’t really require any additional modifications to your existing Terraform code if you were already using a Provider block.

If you don’t already have a provider block defined, you can find their configurations from the Terraform Providers documentation.

You simply need to call the terraform init command before you can perform any other action. If you fail to do so, you’ll receive an error informing you of the required actions (img 1a).
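
For reference, reinitialization is a single command run from the project directory:

# Download and install the provider plugins referenced in this configuration;
# run it once per project, and again whenever your providers change
terraform init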

After successfully reinitializing your project, you will be provided with the list of providers that were installed as well as the versions requested (img 1b).

You’ll notice that Terraform suggests versions for the providers we are using – this is because we did not specify any specific versions of our providers in code. Since providers are now independently released entities, we have to tell Terraform what code it should download and use to run our project.

(Image 1a: Notice of required reinitialization)

(Image 1b: Response from successful reinitialization)

Providers are released separately from Terraform itself, and maintain their own version numbers.

You can specify the version(s) you want to target in your existing provider blocks by adding the version property (code block 1). These versions should follow the semantic versioning specification (similar to node’s package.json or python’s requirements.txt).

For production use, it is recommended to limit the acceptable provider versions to ensure that new versions with breaking changes are not automatically installed.

(Code Block 1: Provider Config)

provider "aws" {
  version = "0.1.4"
  allowed_account_ids = ["1234567890"]
  region = "us-west-2"
}
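
If you’d rather not pin a single exact release, the version property also accepts constraint operators. A sketch of a slightly looser constraint (here ~> 0.1.4 allows any 0.1.x release at or above 0.1.4, but nothing from 0.2.0 onward) might look like this:

provider "aws" {
  # Allow newer patch releases within the 0.1.x series, but block 0.2.0 and above
  version = "~> 0.1.4"
  allowed_account_ids = ["1234567890"]
  region = "us-west-2"
}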

(Image 1c: Currently defined provider configuration)

2. New auto-approve flag for terraform apply

In previous versions, running terraform apply would immediately apply any changes between your project and saved state.

Your normal workflow would likely be:
run terraform plan followed by terraform apply and hope nothing changed in between.

This version introduced a new auto-approve flag which will control the behavior of terraform apply.

Deprecation Notice

This flag currently defaults to true to maintain backwards compatibility, but the default will change to false in the near future.

2.1 auto-approve=true (current default)

When set to true, terraform apply will work like it has in previous versions.

If you want to maintain this behavior, you should update your scripts, build systems, etc. now to pass the flag explicitly, as this default value will change in a future Terraform release.

(Code Block 2: Apply with default behavior)

# Apply changes immediately without plan file
terraform apply --auto-approve=true

2.2 auto-approve=false

When set to false, Terraform will present the user with the execution plan and pause for interactive confirmation (img 2a).

If the user provides any response other than yes, terraform will exit without applying any changes.

If the user confirms the execution plan with a yes response, Terraform will then apply the planned changes (and only those changes).

If you are trying to automate your Terraform scripts, you might want to consider producing a plan file for review, then providing explicit approval to apply the changes from the plan file.

(Code Block 3: Apply plan with explicit approval)

# Create Plan
terraform plan -out=tfplan

# Apply approved plan
terraform apply tfplan --auto-approve=true

(Image 2a: Terraform apply with execution plan)

3. Existing terraform env commands replaced by terraform workspace

The terraform env family of commands was replaced with terraform workspace to help alleviate some confusion in functionality. Workspaces are very useful and can do much more than just split up environment state (which isn’t necessarily what they should be used for). I recommend checking them out and seeing if they can improve your projects.

There is not much to do here other than switch the command invocation; the previous commands still work for now, but they are deprecated.
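
For anyone updating scripts or documentation, the new invocations map one-to-one onto the old terraform env subcommands; a few representative examples:

# List available workspaces (previously: terraform env list)
terraform workspace list

# Create and switch to a new workspace named "staging" (previously: terraform env new staging)
terraform workspace new staging

# Switch back to the default workspace (previously: terraform env select default)
terraform workspace select default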

 


— Steve Byerly, Principal SDE (IV), Cloud, 2nd Watch


The Most Popular AWS Products of 2016

We know from the past 5 years of Gartner Magic Quadrants that AWS is a leader among IaaS vendors, placing the furthest for ‘completeness of vision’ and ‘ability to execute.’ AWS’ rapid pace of innovation contributes to its position as the leader in the space. The cloud provider releases hundreds of product and service updates every year. So, which of those are the most popular amongst our enterprise clients?

We analyzed data from our customers for the year, from a combined 100,000+ instances running monthly. The most popular AWS products and services, represented by the percentage of 2nd Watch customers utilizing them in 2016, include Amazon’s two core services for compute and storage – EC2 and S3 – and Amazon Data Transfer, each at 100% usage. Other high-ranking products include Simple Queue Service (SQS) for message queuing (84%) and Amazon Relational Database Service or RDS (72%). Usage for these services remains fairly consistent, and we would expect to see these services across most AWS deployments.

There are some relatively new AWS products and services that made the “most-popular” list for 2016 as well. AWS Lambda serverless computing (38%), Amazon WorkSpaces, a secure virtual desktop service (27%), and Kinesis, a real-time streaming data platform (12%), are quickly being adopted by AWS users and rising in popularity.

The fastest-growing services in 2016, based on CAGR, include AWS CloudTrail (48%), Kinesis (30%), Config for resource inventory, configuration history, and change notifications (24%), Elasticsearch Service for real-time search and analytics (22%), Elastic MapReduce, a tool for big data processing and analysis (20%), and Redshift, the data warehouse service alternative to systems from HP, Oracle, and IBM (14%).

The accelerated use of these products demonstrates how quickly new cloud technologies are becoming the standard in today’s evolving market. Enterprises are moving away from legacy systems to cloud platforms for everything from back-end systems to business-critical, consumer-facing assets. We expect growth in each of these categories to continue as large organizations realize the benefits and ease of using these technologies.

Download the 30 Most Popular AWS Products infographic to find out which others are in high-demand.

-Jeff Aden, Co-Founder & EVP Business Development & Marketing