Amazon Redshift Stands Strong Despite Maintenance Challenges

AWS says Amazon Redshift is the world's fastest cloud data warehouse, allowing customers to analyze petabytes of structured and semi-structured data at speeds fast enough for exploratory analysis. According to a 2018 Forrester report, Redshift is the most popular cloud data warehouse for enterprises.

To better understand how enterprises are using Redshift, 2nd Watch surveyed Redshift users at large companies. A majority of respondents (57%) said their Redshift implementation had delivered on corporate expectations, while another 26% said it had “somewhat” delivered.

With all the benefits Redshift enables, it's no wonder tens of thousands of customers use it. Claimed advantages such as up to three times the performance of other cloud data warehouses and costs up to 50% lower make it an attractive service to Fortune 500 companies and startups alike, including McDonald's, Lyft, Comcast, and Yelp, among others.

Overall Findings:

Despite its apparent success in the market, not all Redshift deployments have gone according to plan. 45% of respondents said queries stacking up in queues was a recurring problem in their Redshift deployment; 30% said some of their data analysts' time was lost to tuning Redshift queries; and 34% said queries were taking more than one minute to return results. Meanwhile, 33% said they were struggling to manage requests for permissions, and 25% said their Redshift costs were higher than anticipated.

Query and Queuing Learnings:

Queuing of queries is not a new problem. Redshift has a long-underutilized feature called Workload Management (WLM) queues. These queues are like different entrances to a baseball stadium: they all lead to the same game, but with different ways to get in. WLM queues divvy up compute and processing power among groups of users so no single "heavy" user ends up dominating the database and preventing others from accessing it. It's common to have queries stack up in the default WLM queue. A better pattern is to have at least three or four different workload management queues (a configuration sketch follows the list below):

  1. ETL processes
  2. Administration
  3. Ad hoc exploration
  4. Data loading and unloading
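
As a rough sketch of what that might look like (the queue names, concurrency settings, and memory splits below are placeholders to adapt to your workload, not recommendations), manual WLM is defined as JSON in the cluster parameter group's wlm_json_configuration parameter and can be applied with boto3:

```python
import json
import boto3  # assumes AWS credentials and region are configured

# Illustrative manual WLM setup: one queue per workload type, plus the default queue last
wlm_config = [
    {"user_group": ["etl"], "query_concurrency": 3, "memory_percent_to_use": 30},
    {"user_group": ["admin"], "query_concurrency": 2, "memory_percent_to_use": 10},
    {"query_group": ["adhoc"], "query_concurrency": 5, "memory_percent_to_use": 30},
    {"user_group": ["loaders"], "query_concurrency": 3, "memory_percent_to_use": 25},
    {"query_concurrency": 2},  # default queue catches anything not routed above
]

redshift = boto3.client("redshift")
redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-cluster-params",  # hypothetical parameter group name
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps(wlm_config),
    }],
)
```

Depending on which properties changed, the new configuration is applied dynamically or on the next cluster reboot.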

As for time lost to performance tuning, this is a tradeoff with Redshift: it is inexpensive on the compute side but takes some care and attention on the human side. Redshift is extremely high-performing when designed and implemented correctly for your use case. It's common for Redshift users to design tables at the beginning of a data load and then not revisit that design until there is a problem, often after other data sets have entered the warehouse. It's a best practice to routinely run ANALYZE, keep automatic vacuum turned on, and know how your most common queries are structured so you can sort tables accordingly.

If queries are taking a long time to run, ask whether the latency is due to the heavy processing needs of the query or to tables that are designed inefficiently with respect to the query. For example, if a query aggregates sales by date but the sale timestamp is not a sort key, the query planner may have to scan far more data blocks than necessary just to make sure it has all the right data, which takes time. On the other hand, if your data is already nicely sorted but you have to aggregate terabytes of data into a single value, then waiting a minute or more for results is not unusual.
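
Here's a minimal sketch of the sales-by-date example (table, column, and connection details are hypothetical, using the redshift_connector driver): with the sale timestamp as the sort key, the aggregation can skip blocks outside the dates it needs, and a routine ANALYZE keeps the planner's statistics current.

```python
import redshift_connector  # assumption: connecting with the redshift_connector driver

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-west-2.redshift.amazonaws.com",  # hypothetical endpoint
    database="analytics", user="admin", password="change-me")
conn.autocommit = True
cur = conn.cursor()

# Sorting on the sale timestamp lets zone maps prune blocks outside the date range
cur.execute("""
    CREATE TABLE sales (
        sale_id  BIGINT,
        sold_at  TIMESTAMP,
        amount   DECIMAL(12,2)
    )
    SORTKEY (sold_at);
""")

# Refresh planner statistics after loads so the optimizer knows the table's shape
cur.execute("ANALYZE sales;")

# The aggregation now reads only the blocks it needs instead of scanning the table
cur.execute("""
    SELECT DATE_TRUNC('day', sold_at) AS sale_date, SUM(amount) AS total_sales
    FROM sales
    GROUP BY 1
    ORDER BY 1;
""")
```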

Permissions

Some survey respondents mentioned that permissions were difficult to manage. There are several options for configuring access to Redshift. Some users create database users and groups internal to Redshift and manage authentication at the database level (for example, logging in via SQL Workbench). Others delegate permissions with an identity provider like Active Directory.
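
For the database-level approach, a minimal sketch looks like the following (user, group, and schema names are hypothetical); an identity provider integration would instead map federated users onto groups like these.

```python
import redshift_connector  # assumption: using the redshift_connector driver

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-west-2.redshift.amazonaws.com",  # hypothetical endpoint
    database="analytics", user="admin", password="change-me")
conn.autocommit = True
cur = conn.cursor()

# Create a group for analysts and a database user who belongs to it
cur.execute("CREATE GROUP analysts;")
cur.execute("CREATE USER jane_doe PASSWORD 'Str0ngPassw0rd' IN GROUP analysts;")

# Grant the group read-only access to the reporting schema
cur.execute("GRANT USAGE ON SCHEMA reporting TO GROUP analysts;")
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA reporting TO GROUP analysts;")
```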

Implementation and Cost Savings

Enterprise IT directors are working to overcome their Redshift implementation challenges. 30% said they are rewriting queries, and 28% said they have compressed their data in S3 as part of a LakeHouse architecture. Respondents reported that query tuning had the greatest impact on the performance of their Redshift clusters.

When Redshift costs exceed the plan, it is a good practice to assess where the costs are coming from. Is it from storage, compute, or something else? Generally, if you are looking to save on Redshift spend, you should explore a LakeHouse architecture, which is a storage pattern that shifts data between S3 and your Redshift cluster. When you need lots of data for analysis, data is loaded into Redshift. When you don’t need that data anymore, it is moved back to S3 where storage is much cheaper. However, the tradeoff is that analysis is slower when data is in S3.
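
A minimal sketch of that pattern (bucket, IAM role, and table names are hypothetical): unload colder data to S3 in a columnar format, and keep it reachable through an external schema so Redshift Spectrum can still query it when needed.

```python
import redshift_connector  # assumption: using the redshift_connector driver

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-west-2.redshift.amazonaws.com",  # hypothetical endpoint
    database="analytics", user="admin", password="change-me")
conn.autocommit = True
cur = conn.cursor()

# Move cold rows out of the cluster into cheaper S3 storage (Parquet keeps them queryable)
cur.execute("""
    UNLOAD ('SELECT * FROM sales WHERE sold_at < ''2018-01-01''')
    TO 's3://my-lakehouse-bucket/sales/archive/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    FORMAT AS PARQUET;
""")

# Register an external schema so the archived data stays reachable via Redshift Spectrum
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS lake
    FROM DATA CATALOG DATABASE 'lakehouse'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
""")
```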

Another place to look for cost savings is instance size. It is possible you have over-provisioned your Redshift nodes. Look at metrics like CPU utilization; if it consistently sits at or below 25-30%, you have too much headroom and are probably over-provisioned.
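
One quick way to check is to pull the cluster's CPUUtilization metric from CloudWatch. A minimal sketch with boto3, assuming a hypothetical cluster identifier:

```python
from datetime import datetime, timedelta

import boto3  # assumes AWS credentials and region are configured

cloudwatch = boto3.client("cloudwatch")

# Hourly average CPU utilization for the cluster over the last two weeks
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Redshift",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "ClusterIdentifier", "Value": "my-cluster"}],  # hypothetical
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average"],
)

samples = [dp["Average"] for dp in resp["Datapoints"]]
if samples and max(samples) < 30:
    print("Cluster rarely exceeds 30% CPU - it may be over-provisioned.")
```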

Popular Features

Challenges aside, enterprise IT directors seem to love Redshift. The top three Redshift features, according to our survey, are query monitoring rules (cited by 44% of respondents), federated queries (35%), and custom-built ETL workflows (33%).

Query Monitoring Rules are custom rules that track bad or slow queries. Customers love them because they are simple to write and give great visibility into queries that could disrupt operations. You can watch obvious metrics like query_execution_time, or more subtle ones like query_blocks_read, which is a rough proxy for how much data a query has to scan. Customers like these rules because the reporting is centralized and frees them from manually checking queries themselves.
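
Query Monitoring Rules live inside the WLM configuration. A minimal sketch of a single rule (the rule name and threshold are illustrative) that logs any query running longer than 60 seconds:

```python
import json
import boto3  # assumes AWS credentials and region are configured

# One WLM queue with a monitoring rule attached; thresholds here are illustrative
wlm_with_rules = [{
    "query_concurrency": 5,
    "rules": [{
        "rule_name": "log_long_queries",
        "predicate": [
            {"metric_name": "query_execution_time", "operator": ">", "value": 60}
        ],
        "action": "log",  # "hop" and "abort" are the other available actions
    }],
}]

boto3.client("redshift").modify_cluster_parameter_group(
    ParameterGroupName="my-cluster-params",  # hypothetical parameter group name
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps(wlm_with_rules),
    }],
)
```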

Federated queries allow you to bring in live, external data to join with your internal Redshift data. You can query, for example, an RDS instance in the same SQL statement as a query against your Redshift cluster. This allows for dynamic and powerful analysis that normally would take many time-consuming steps to get the data in the same place.
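
A minimal sketch of what that looks like (the endpoint, IAM role, secret ARN, and table names are hypothetical): create an external schema that points at an RDS PostgreSQL database, then join it against a local Redshift table in a single statement.

```python
import redshift_connector  # assumption: using the redshift_connector driver

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-west-2.redshift.amazonaws.com",  # hypothetical endpoint
    database="analytics", user="admin", password="change-me")
conn.autocommit = True
cur = conn.cursor()

# Map an RDS PostgreSQL database into Redshift as an external schema
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS rds_orders
    FROM POSTGRES
    DATABASE 'orders' SCHEMA 'public'
    URI 'orders-db.abc123.us-west-2.rds.amazonaws.com' PORT 5432
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftFederatedRole'
    SECRET_ARN 'arn:aws:secretsmanager:us-west-2:123456789012:secret:rds-orders';
""")

# Join live operational data against warehouse data in one statement
cur.execute("""
    SELECT s.sale_date, s.total_sales, o.open_order_count
    FROM daily_sales s
    JOIN rds_orders.order_summary o ON o.summary_date = s.sale_date;
""")
```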

Finally, custom-built ETL workflows have become popular for several reasons. First, there is a lot of compute power sitting in a Redshift cluster, and you pay for it whether or not you use it, so unused capacity can be put to work on ongoing ETL. Second, and this is an interesting twist, Redshift has become a popular ETL tool because of its strength in processing SQL statements. Yes, ETL written in SQL has become popular, especially for complicated transformations and joins that would be cumbersome to write in Python, Scala, or Java.
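
As a minimal sketch of SQL-based ETL inside the warehouse (table names are hypothetical), a transformation that would take a fair amount of Python or Scala collapses into one set-based statement:

```python
import redshift_connector  # assumption: using the redshift_connector driver

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-west-2.redshift.amazonaws.com",  # hypothetical endpoint
    database="analytics", user="admin", password="change-me")
conn.autocommit = True
cur = conn.cursor()

# A set-based transform: join raw events to a customer dimension and aggregate
cur.execute("""
    INSERT INTO daily_revenue_by_segment
    SELECT
        DATE_TRUNC('day', e.event_time) AS revenue_date,
        c.segment,
        SUM(e.amount)                   AS revenue
    FROM raw_events e
    JOIN dim_customer c ON c.customer_id = e.customer_id
    WHERE e.event_type = 'purchase'
    GROUP BY 1, 2;
""")
```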

Conclusion

Redshift's place in the enterprise IT stack seems secure, though how IT departments use the solution will likely change over time – perhaps significantly. The reason to persist with all the maintenance tasks listed above is that Redshift is increasingly becoming the centerpiece of a data-driven analytics program. Data volume is not shrinking; it is always growing. If you take advantage of these performance features, you will make the most of your Redshift cluster and therefore of your analytics program.

Download the infographic on our survey findings.

-Rob Whelan, Data Engineering & Analytics Practice Director

 


You’re on AWS, now what? Five things you should consider now.

You migrated your applications to AWS for a reason. Maybe it was for the unlimited scalability, powerful computing capability, ease and flexibility of deployment, or the move from a CapEx to an OpEx model, or maybe it was simply because the boss told you to. However you got there, you're there. So, what's next? How do you take advantage of your applications and data that reside in AWS? What should you be thinking about in terms of security and compliance? Here are five things you should consider to amplify the value of being on AWS:

  1. Create competitive advantage from your AWS data
  2. Accelerate application development
  3. Increase the security of your AWS environment
  4. Ensure cloud compliance
  5. Reduce cloud spend without reducing application deployment

Create competitive advantage from your data

You have a wealth of information in the form of your AWS datasets. Finding patterns and insights not just within these datasets, but across all datasets is key to using data analysis to your advantage. You need a modern, cloud-native data lake.

Data lakes, though, can be difficult to implement and require specialized, focused knowledge of data architecture. Utilizing a cloud expert can help you architect and deploy a data lake geared toward your specific business needs, whether it’s making better-informed decisions, speeding up a process, reducing costs or something else altogether.

Download this datasheet to learn more about transforming your data analytics processes into a flexible, scalable data lake.

Accelerate application development

If you arrived at AWS to take advantage of the rapid deployment of infrastructure to support development, you understand the power of bringing applications to market faster. Now may be the time to fully immerse your company in a DevOps transformation.

A DevOps Transformation involves adopting a set of cultural values and organizational practices that improve business outcomes by increasing collaboration and feedback between business stakeholders, Development, QA, IT Operations, and Security. This includes an evolution of your company culture, automation and tooling, processes, collaboration, measurement systems, and organizational structure—in short, things that cannot be accomplished through automation alone.

To learn more about DevOps transformation, download this free eBook about the Misconceptions and Challenges of DevOps Transformation.

Increase the security of your AWS environment

How do you know if your AWS environment is truly secure? You don't, unless you conduct a comprehensive security assessment of your AWS environment that measures it against the latest industry standards and best practices. This type of review provides a list of vulnerabilities with actionable remediations, an evaluation of your Incident Response Policy, and a comprehensive consultation on the system issues causing those vulnerabilities.

To learn more, review this Cloud Security Rapid Review document and learn how to gain protection from immediate threats.

Ensure cloud compliance

Deploying and managing cloud infrastructure requires new skills, software, and management to maintain regulatory compliance within your organization. Without the proper governance in place, organizations can be exposed to security vulnerabilities and potentially compromise confidential information.

A partner like 2nd Watch can be a great resource in this area. The 2nd Watch Compliance Assessment and Remediation service is designed to evaluate, monitor, auto-remediate, and report on compliance of your cloud infrastructure, assessing industry standard policies including CIS, GDPR, HIPAA, NIST, PCI-DSS, and SOC2.

Download this datasheet to learn more about our Compliance Assessment & Remediation service.

Reduce cloud spend without reducing application deployment

Need to get control of your cloud spend without reducing the value that cloud brings to your business? This is a common discussion we have with clients. To reduce your cloud spend without decreasing the benefits of your cloud environment, we recommend examining the Pillars of Cloud Cost Optimization to prevent over-expenditure and wasted investment. The pillars include:

  • Auto-parking and on-demand services
  • Cost models
  • Rightsizing
  • Instance family / VM type refresh
  • Addressing waste
  • Shadow IT

Organizations that incorporate cloud cost optimization into their cloud infrastructure management can find significant savings, especially larger organizations with considerable cloud spend.

Download our A Holistic Approach to Cloud Cost Optimization eBook to learn more.

After you’ve migrated to AWS, the next logical step in ensuring IT satisfies corporate business objectives is knowing what’s next for your organization in the cloud. Moving to the cloud was the right decision then and can remain the right decision going forward. Implement any of the five recommendations and accelerate your organization forward.

-Michael Elliott, Sr Director of Product Marketing


What to Ask Yourself When Considering VMware Cloud on AWS

Deciding on the best cloud strategy for your business can be overwhelming, especially if you’re new to the cloud. If you’re considering VMware Cloud on AWS (VMC on AWS), ask yourself these questions to find out if it’s the best solution for your needs.

1. Is it cost-effective for your business?

VMware is a premium brand and if you’re just looking at the compute cost, it may seem out of budget. To get an accurate comparison, you need to evaluate the compute cost against the expenses incurred in an on-prem environment – real estate, line pull, hardware, software maintenance, headcount, management, upgrades, and travel costs. Because it can be difficult to estimate these operational costs ahead of implementation, VMware provides some tools to help.

  • Production Pricing Calculator: Provide a roadmap of the features you need in the cloud, along with workload sizing, to get a cost calculation (or post-sizing calculation) that includes software overhead.
  • Operations Manager in VMware: Get a granular estimate of the cost for a sub-segment of your workload using this VMware management tool. Best for larger organizations where workload has a bigger impact on costs.
  • Network Insight in VMware: Another VMware management tool, Network Insight tracks traffic flow, something often neglected when comparing on-prem and cloud costs.

2. Do you use proof of concept environments?

Proof of concept (POC) environments let you evaluate a product in your architecture and demonstrate its capabilities. As opposed to POCs on hardware, where someone has to unrack the hardware, unplug it, find the original box it came in, and ship it back once you've completed your trial, closing a POC with VMC on AWS takes as few as three clicks. This might not seem like a big deal, but it's a huge time and resource saver for technicians. Additionally, it makes everyone more willing to try new products, ensuring your environment is best equipped for your business.

3. Do you want to add hosts easily?

Adding hosts to your environment increases computing and storage capacity. With a data center, you buy hardware based on an estimate of capacity alongside your budget. After getting a quote and a purchase order, it can take six months to get your hardware. Then you need to rack and stack it and depend on the data center team to report back. Over the next three to five years, you amortize the cost of the hardware and your effort.

With VMC on AWS, you input how many hosts you want, and nine minutes later an additional host is added to the cluster. When you no longer need the host, you can turn it off and be billed only for the time it was used. This quick control over your capacity keeps costs low, productivity high, and resource use optimized.

4. Do you need disaster recovery?

Using VMC for disaster recovery (DR) is becoming more popular with larger companies and those needing virtual desktop infrastructure (VDI), failover, and burst capability. This allows you to get started on VMC without it being heavily utilized until you’re ready.

Smaller companies considering DR on VMC need to consider the size versus cost model to determine what’s best for them. If you’re doing a business continuity case using VMC as a pilot light, then you can layer on Site Recovery Manager (SRM), VMware’s DR solution, very easily. In fact, you may be able to use VMC on AWS for more than just DR, including cloud strategy, business continuity or the pilot light, and potentially bursting capability for your on-prem. When you can rely on one solution for multiple purposes, you save time and resources through simplicity and standardization.

5. Do you just want it to work?

Professionals outside of tech have one simple goal – they just need this stuff to run reliably. They need a solution that allows them to focus on their responsibilities, rather than navigating issues, set-up, and dealing with other distractions.

One of the best things about VMC on AWS is the hands-off, ‘set it and forget it’ capability. The hardware and the upgrades are no longer your concern. There’s no need to spend so much money, time, and effort reinventing the wheel. It’s the bill versus pay model and it can put a lot of people in your organization at ease.

Building your cloud strategy, determining what products to use, and creating the architecture is all unique to your individual company. Our VMware Cloud experts can help you navigate your options for the best long-term results. Contact Us to take the next step in your cloud journey.


Improved Performance and Disaster Recovery with VMware Cloud on AWS

Even though public cloud adoption has become mainstream among enterprises, the heavily touted full cloud adoption has not become a reality for many companies, nor will it for quite some time. Instead, we see greater adoption of hybrid cloud, a mixture of public and private clouds, as the predominant deployment of IT services. With private cloud deployments largely running on the market-share leader, VMware, a VMware Cloud on AWS solution gains even more credence.

Looking back two years to when VMware and AWS announced that they had co-engineered a cloud solution, it makes a lot more sense now. That wasn't necessarily always the case. I'll be among the first to admit that I failed to see how the two competitive solutions would coexist in a way that provided value to the customer. But then again, I was fully drinking the cloud punch that said refactoring applications and deploying with a "cattle vs. pets" mentality was necessary to enable a full-on digital transformation and merely survive in the evolving as-a-service world.

What I was not considering was that more than 75% of private clouds were running on VMware.  Or that companies had made a significant investment into not only the licensing and tooling, but also in their people, to run VMware.  It would not have made sense to move everything to the cloud in many situations. 

I viewed it solely as a “lift and shift” opportunity.  It provided a means for companies to move their IT infrastructure out of the data center and “check the box” for fully migrating to the cloud while allowing for the gradual adoption of AWS cloud native solutions as they trained staff accordingly.   

While it is true that a complete data center evacuation is a common request, with various factors influencing the decision, delaying cloud-native adoption is less of a driver. Some companies make the decision because they have been unsuccessful in renegotiating their contract with their colocation provider and find themselves in a tough situation: rapidly move or be locked in for another lengthy contract. In other situations, the CIO has decided that valuable human capital would be better spent delivering higher value to the company than running a data center, and that converting IT infrastructure from a CAPEX to an OPEX model works better for the business.

However, there are two use cases that seem to be bigger drivers of VMware Cloud on AWS: the need for improved performance and disaster recovery.

Aside from on-demand access to infrastructure, another big advantage of AWS is the sheer number of services that become available to use in a matter of minutes and can be easily connected to your applications residing on VMware VMs. With VMware Hybrid Cloud Extension (HCX), moving applications between on-premises VMware deployments and VMware Cloud on AWS deployments is seamless. This allows your VMs to be closer to the dependent AWS tooling, improving latency and potentially performance for your users. If you have a geographically dispersed user base, you can easily set up a VMware cluster in a region much closer to them, further reducing latency.

I do want to caution, though, that prior to migrating your applications to VMware Cloud on AWS, you should create a dependency map of all the VMs in your on-premises environment. It is necessary to have a thorough understanding of which other VMs your applications communicate with. We have seen numerous cases where dependencies were not properly identified, resulting in dissatisfaction when the application is moved to VMware Cloud on AWS but the SAP database remains on-premises. So, while you may have brought the application closer to your users, performance can suffer if its dependencies are not located nearby.

The other use case that has been gaining adoption is the ability to have a disaster recovery environment. With severe natural disasters occurring at what seems like an increasing rate, there is a real threat that your business could be impacted by downtime. VMware Cloud on AWS, coupled with VMware Site Recovery Manager, gives you an opportunity to put in place a business continuity plan across geographically diverse regions to help ensure that your business keeps running.

The other exciting thing is that hybrid cloud no longer has to live outside your data center. VMware Cloud on AWS has gained such widespread acceptance that, at AWS re:Invent 2019, VMware announced the opening of a VMware Cloud on AWS Outposts Beta program, which brings popular AWS Cloud features right into your data center to work alongside VMware. This seems best suited to clients who need the benefits of VMware Cloud on AWS but have data sovereignty issues or legacy applications that simply cannot migrate to an off-premises VMware Cloud.

As one of only a handful of North American VMware Partners to possess the VMware Master Services Competency in VMware Cloud on AWS, 2nd Watch has performed numerous successful VMware Cloud on AWS Implementations.  We also support AWS Outposts, helping AWS customers overcome challenges that exist due to managing and supporting infrastructures both on-premises and in cloud environments, for a truly consistent hybrid experience.

If you want to understand how VMware Cloud on AWS can further enable your hybrid cloud adoption, schedule a VMware Cloud on AWS Workshop – a 4-hour, complimentary, on-site overview of VMware Cloud on AWS and appropriate use cases – to see if it is right for your business.  

-Dusty Simoni, Sr Product Manager


AWS re:Invent 2019: AWS Product/Service Review, a Networking Perspective

Announcements for days!

AWS re:Invent 2019 has come and gone, and now the collective audience has to sort through the massive list of AWS announcements released at the event. According to the AWS re:Invent 2019 Recap communication, AWS released 77 products, features, and services in just 5 days! Many of the announcements were in the Machine Learning (ML) space (20 total), closely followed by announcements around Compute (16 total), Analytics (6 total), Networking and Content Delivery (5 total), and the AWS Partner Network (5 total), among others. In the area of ML, things like AWS DeepComposer, Amazon SageMaker Studio, and Amazon Fraud Detector topped the list, while in the Compute, Analytics, and Networking space, Amazon EC2 Inf1 Instances, AWS Local Zones, AWS Outposts, Amazon Redshift Data Lake, AWS Transit Gateway Network Manager, and Inter-Region Peering were at the forefront. Here at 2nd Watch we love the cutting-edge ML feature announcements like everyone else, but we always have our eye on the announcements that key in on what our customers need now – announcements that can have an immediate benefit for our customers in their ongoing cloud journey.

All About the Network

In Matt Lehwess' presentation, Advanced VPC design and new capabilities for Amazon VPC, he kicked off the discussion with a poignant note: "Networking is the foundation of everything, it's how you build things on AWS, you start with an Amazon VPC and build up from there. Networking is really what underpins everything we do in AWS. All the services rely on Networking." This statement strikes a chord here at 2nd Watch, as we have seen that sentiment in action. Over the last couple of years, our customers have been accelerating their use of VPCs, and, as of 2018, Amazon VPC is the number one AWS service among our customers, with 100% of them using it. We look for that same trend to continue as 2019 comes to an end. It's not the sexiest part of AWS, but networking provides the foundation that brings all the other services together. So, focusing on newer and more efficient networking tools and architectures to get services to communicate is always at the top of the list when we look at new announcements. Here are our takes on the key announcements.

AWS Transit Gateway Inter-Region Peering (Multi-Region)

One exciting feature announcement in the networking space is Inter-Region Peering for AWS Transit Gateway. This feature lets you establish peering connections between Transit Gateways in different AWS Regions. Previously, connectivity between two Transit Gateways could only be achieved through a Transit VPC, which carried the overhead of running your own networking devices. Inter-Region Peering for AWS Transit Gateway lets you remove the Transit VPC and connect Transit Gateways directly.

The solution uses a new static attachment type called a Transit Gateway Peering Attachment that, once created, requires an acceptance or rejection from the accepter Transit Gateway. In the future, AWS will likely allow dynamic attachments, so they advise assigning unique ASNs to each Transit Gateway for the easiest transition. The solution also uses encrypted VPC peering across the AWS backbone. Currently, Transit Gateway Inter-Region Peering is available for gateways in the US East (Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and EU (Frankfurt) AWS Regions, with support for other regions coming soon. Note that you can't peer Transit Gateways in the same region.

(Source: Matt Lehwess: Advanced VPC design and new capabilities for Amazon VPC (NET305))

On the surface the ability to connect two Transit Gateways is just an incremental additional feature, but when you start to think of the different use cases as well as the follow-on announcement of Multi-Region Transit Gateway peering and Accelerated VPN solutions, the options for architecture really open up.  This effectively enables you to create a private and highly-performant global network on top of the AWS backbone.  Great stuff!
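
For those who want to script it, here is a minimal sketch with boto3 (the gateway IDs, account number, and Regions are hypothetical) that creates a peering attachment from one Region and accepts it in the other:

```python
import boto3  # assumes AWS credentials are configured

# Requester side: a Transit Gateway in us-west-2 peering to one in us-east-1
ec2_west = boto3.client("ec2", region_name="us-west-2")
attachment = ec2_west.create_transit_gateway_peering_attachment(
    TransitGatewayId="tgw-0aaa1111bbb22222c",      # hypothetical requester TGW
    PeerTransitGatewayId="tgw-0ddd3333eee44444f",  # hypothetical accepter TGW
    PeerAccountId="123456789012",
    PeerRegion="us-east-1",
)
attachment_id = attachment["TransitGatewayPeeringAttachment"]["TransitGatewayAttachmentId"]

# Accepter side: the peering must be explicitly accepted in the other Region
ec2_east = boto3.client("ec2", region_name="us-east-1")
ec2_east.accept_transit_gateway_peering_attachment(
    TransitGatewayAttachmentId=attachment_id,
)
```

Because the attachments are static, you still need to add routes pointing at the peering attachment in each Transit Gateway route table before traffic will flow.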

AWS Transit Gateway Network Manager

This new feature is used to centrally monitor your global network across AWS and on premises. Transit Gateway Network Manager simplifies the operational complexity of managing networks across regions and remote locations. It is another AWS feature that takes a dashboard approach, providing a simpler overview of resources that may be spread over several regions and accounts. To use it, you create a Global Network, an object in the AWS Transit Gateway Network Manager service that represents your private global network in AWS. It includes your AWS Transit Gateway hubs, their attachments, and your on-premises devices, sites, and links. Once the Global Network is created, you extend the configuration by adding Transit Gateways, information about your on-premises devices, sites, links, and the Site-to-Site VPN connections with which they are associated, and then use it to visualize and monitor your network. It includes a nice geographic world map view to visualize VPNs (whether they're up, down, or impaired) and Transit Gateway peering connections.

(Image: AWS Transit Gateway Network Manager geographic view – https://d1.awsstatic.com/re19/gix/gorgraphic.cdb99cd59ba34015eccc4ce5eb4b657fdf5d9dd6.png)

There’s also a nice Topology feature that shows VPCs, VPNs, Direct Connect gateways, and AWS Transit Gateway-AWS Transit Gateway peering for all registered Transit gateways.  It provides an easier way to understand your entire global infrastructure from a single view.
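
Getting started can also be scripted. Here is a minimal sketch with boto3 (the description and Transit Gateway ARN are hypothetical) that creates the Global Network object and registers an existing Transit Gateway with it:

```python
import boto3  # assumes AWS credentials are configured

# Network Manager is a global service; its API is homed in us-west-2
nm = boto3.client("networkmanager", region_name="us-west-2")

# The Global Network is the container for everything Network Manager monitors
global_network = nm.create_global_network(Description="Corporate global network")
gn_id = global_network["GlobalNetwork"]["GlobalNetworkId"]

# Register an existing Transit Gateway (ARN is hypothetical) so its attachments,
# VPNs, and peerings show up in the Network Manager dashboard
nm.register_transit_gateway(
    GlobalNetworkId=gn_id,
    TransitGatewayArn="arn:aws:ec2:us-west-2:123456789012:transit-gateway/tgw-0aaa1111bbb22222c",
)
```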

Another key feature is the integration with SD-WAN providers like Cisco, Aviatrix, and others. Many of these solutions will integrate with AWS Transit Gateway Network Manager and automate the branch-cloud connectivity and provide end-to-end monitoring of the global network from a single dashboard. It’s something we look forward to exploring with these SD-WAN providers in the future.

AWS Local Zones

AWS Local Zones is an interesting new service that addresses challenges we've encountered with customers. Although listed under Compute rather than Networking and Content Delivery on the re:Invent 2019 announcement list, Local Zones is a powerful new feature with networking at its core.

Latency tolerance for application stacks running in a hybrid scenario (i.e., app servers in AWS, database on-prem) is a standard conversation when planning a migration. Historically, those conversations were predicated on the customer's proximity to an AWS Region. Depending on requirements, customers in Portland, Oregon might have the option to run a hybrid application stack, while those in Southern California might have been excluded. The announcement of Local Zones (initially just in Los Angeles) opens up those options to markets that were not previously served. I hope this is the first of many localized resource deployments.

That’s no Region…that’s a Local Zone

Local Zones are interesting in that they offer only a subset of the services available in a standard region. Each Local Zone is organized as a child of a parent region; notably, the Los Angeles Local Zone is a child of the Oregon Region. API communication is done through Oregon, and even the name of the LA Local Zone AZ maps to Oregon (Oregon AZ1 = us-west-2a, Los Angeles AZ1 = us-west-2-lax-1a). Organizationally, it's easiest to think of them as remote Availability Zones of existing regions.
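
That parent-child relationship shows up in the tooling as well. A minimal sketch with boto3 (verify the zone group name against the current documentation): opt in to the Los Angeles zone group through the Oregon Region, then list it alongside the regular AZs.

```python
import boto3  # assumes AWS credentials are configured

# Local Zones are managed through their parent Region (Oregon for Los Angeles)
ec2 = boto3.client("ec2", region_name="us-west-2")

# Opt in to the Los Angeles zone group before launching resources there
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# The Local Zone now appears alongside the Region's regular Availability Zones
zones = ec2.describe_availability_zones(AllAvailabilityZones=True)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["ZoneType"], zone["OptInStatus"])
```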

As of December 2019, only a limited number of services are available, including EC2, EBS, FSx, ALB, VPC, and single-zone RDS. Pricing seems to be roughly 20% higher than in the parent region. Given that this is the first Local Zone, we don't know whether that will always be true or whether it depends on location. One would assume Los Angeles would be a higher-cost location whether it was a Local Zone or a full region.

All the Things

To see everything that launched at re:Invent 2019, check out the re:Invent 2019 Announcement Page. For all AWS announcements, not just re:Invent 2019 launches (e.g., things that launched just prior to re:Invent), check out the What's New with AWS webpage. If you missed the show completely or just want to re-watch your favorite AWS presenters, you can find many of the re:Invent presentations on the AWS Events YouTube channel. After you've done all that research and watched all those videos and are ready to get started, you can always reach out to us at 2nd Watch. We'd love to help!

-Derek Baltazar, Managing Consultant

-Travis Greenstreet, Principal Architect
