
CCPA and the cloud

Since the EU introduced the General Data Protection Regulation (GDPR) in 2018, all eyes have been on the U.S. to see if it will follow suit. While a number of states have enacted data privacy statutes, the California Consumer Privacy Act (CCPA) is the most comprehensive U.S. state law to date. Entities were expected to be in compliance with CCPA as of January 1, 2020.

CCPA compliance requires entities to think about how the regulation will impact their cloud infrastructures and development of cloud-native applications. Specifically, companies must understand where personally identifiable information (PII) and other private data lives, and how to process, validate, complete, and communicate consumer information and consent requests.

What is CCPA and how to ensure compliance

CCPA gives California residents greater privacy rights over the data that companies collect about them. It applies to any business with customers in California that has gross revenues over $25 million, acquires personal information from more than 50,000 consumers per year, or earns more than half its annual revenue selling consumers’ personal information.

To ensure compliance, the first thing firms should determine is whether they are collecting PII and, if they are, exactly where it is going. CCPA not only mandates that California consumers have the right to know what PII is being collected, it also states that consumers can dictate whether it’s sold or deleted. Further, if a company suffers a security breach, California consumers have the right to sue that company under the state’s data notification law. This increases the potential liability for companies whose security is breached, especially if their security practices do not conform to industry standards.

Regulations regarding data privacy are proliferating, and it is imperative that companies set up an infrastructure foundation that helps them evolve fluidly with these changes to the legal landscape, as opposed to “frankensteining” their environments to play catch-up. The first step is data mapping, in order to know where all consumer PII lives and, importantly, where California consumer PII lives. This requires geographic segmentation of the data. There are multiple tools, including cloud-native ones, that empower companies with PII discovery and mapping. Second, organizations need a data deletion mechanism and an audit trail for data requests, so that they can prove they have investigated, validated, and adequately responded to requests made under CCPA. The validation piece is also crucial – companies must make sure the individual requesting the data is who they say they are.

Third, having an opt-in or opt-out system in place that allows consumers to consent to their data being collected in the first place is essential for any company doing business in California. If the website is targeted at children, there must be a specific opt-in request for any collection of California consumer data. Each of these three steps must be backed by an audit trail that can validate it.

The cloud

It’s here that we start to consider the impact on cloud journeys and cloud-native apps, as this is where firms can start to leverage tools that Amazon or Azure, for example, currently offer but that haven’t been integral to most businesses in a day-to-day context until now. This includes machine learning tools for data discovery, which help companies know exactly where PII lives, so that they can efficiently comply with data subject requests.

Likewise, cloud infrastructures should be set up so that firms aren’t playing catch-up later on when data privacy and security legislation is enacted elsewhere. For example, encrypt everything and make sure access control permissions are up to date. Organizations must also prevent configuration drift with tools that automatically close a security gap or port if one gets opened during development.
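As an illustration of the kind of automated check that can catch drift, here is a minimal boto3 sketch that flags security groups with ingress open to the internet. The region, the “sensitive ports” list, and the reporting approach are assumptions for illustration, not a prescription.

```python
import boto3

# Hypothetical sketch: flag security groups that allow ingress from anywhere.
# The region and the list of "sensitive" ports are assumptions for illustration.
SENSITIVE_PORTS = {22, 3306, 5432}

ec2 = boto3.client("ec2", region_name="us-west-2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for perm in sg.get("IpPermissions", []):
        open_to_world = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
        port = perm.get("FromPort")  # may be absent when the rule covers all traffic
        if open_to_world and (port is None or port in SENSITIVE_PORTS):
            print(f"Review {sg['GroupId']} ({sg['GroupName']}): "
                  f"port {port} is open to 0.0.0.0/0")
```

In practice, a check like this would run on a schedule (or react to configuration-change events) and trigger automated remediation rather than just printing results.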

For application development teams, it’s vital to follow security best practices, such as CIS benchmarks, NIST standards and the OWASP Top Ten. These teams will be getting the brunt of the workload in terms of developing website opt-out mechanisms, for example, so they must follow best practices and be organized, prepared, and efficient.

The channel and the cloud

For channel partners, there are a number of considerations when it comes to CCPA and the cloud. For one, partners who are in the business of infrastructure consulting should know how the legislation affects their infrastructure and what tools are available to set up a client with an infrastructure that can handle the requests CCPA mandates.

This means having data discovery tools in place, which can be accomplished with both cloud-native versions and third-party software. It also means making sure notification mechanisms are in place, such as email or, if you’re on Amazon, SNS (Simple Notification Service); notification mechanisms help automate responses to data subject requests. Additionally, logging must be enabled to establish an audit trail. Consistent resource tagging and global tagging policies are integral to data mapping and quickly finding data. There’s a lot that can be done from an infrastructure perspective, so firms should familiarize themselves with tools that can facilitate CCPA compliance but may never have been used in this fashion, or indeed at all.
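As a small illustration of the notification piece, here is a hedged boto3 sketch that publishes an incoming data subject request to an SNS topic so a compliance workflow can pick it up. The topic ARN and the request fields are placeholders, not part of any real system.

```python
import json
import boto3

# Hypothetical sketch: notify a compliance team when a CCPA data subject request arrives.
# The topic ARN and request fields are placeholders.
sns = boto3.client("sns", region_name="us-west-2")

request = {
    "request_id": "dsr-0001",            # placeholder identifier
    "type": "deletion",                  # e.g., access, deletion, opt-out
    "received_at": "2020-01-15T10:30:00Z",
}

sns.publish(
    TopicArn="arn:aws:sns:us-west-2:123456789012:ccpa-data-subject-requests",
    Subject=f"CCPA {request['type']} request {request['request_id']}",
    Message=json.dumps(request),
)
```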

Ultimately, when it comes to CCPA, don’t sleep on it. GDPR went into effect less than two years ago, and already we have seen huge fines doled out to the likes of British Airways and Google for compliance failures. The EU has been aggressive about ensuring compliance, and California is likely to play the same game. The state knows that in order to give CCPA any teeth, it has to enforce it.

If you’re interested in learning more about how privacy laws might affect cloud development, watch our “CCPA: State Privacy Law Effects on Cloud Development” webinar on-demand, at your convenience.

– Victoria Geronimo, Product Manager – Security & Compliance


Amazon Forecast: Best Practices

In part one of this article, we offered an overview of Amazon Forecast and how to use it. In part two, we get into Amazon Forecast best practices:

Know your business goal

In our data and analytics practice, business value comes first. We want to know and clarify use cases before we talk about technology. Using Amazon Forecast is no different. When creating a forecast, do you want to make sure you always have enough inventory on hand? Or do you want to make sure that all your inventory gets used all the time? The answer will drive which quantile you look at.

Each quantile – the defaults are 10%, 50%, and 90% – is important for its own reasons, and all of them should be looked at together to give a range. What is the 50% quantile? The forecast at this quantile has a 50-50 chance of being right: the forecasted value has a 50% chance of being higher, and a 50% chance of being lower, than the actual value. The forecast at the 90% quantile has a 90% chance of being higher than the actual value, while the forecast at the 10% quantile has only a 10% chance of being higher. So, if you want to make sure you sell all your inventory, use the 10% quantile forecast.
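Once a forecast has been generated, the quantile values can be pulled back programmatically. Below is a minimal sketch with boto3, assuming a forecast already exists; the forecast ARN and item id are placeholders.

```python
import boto3

# Hypothetical sketch: pull the p10/p50/p90 values for one item from an existing forecast.
# The forecast ARN and item_id are placeholders.
forecastquery = boto3.client("forecastquery", region_name="us-west-2")

response = forecastquery.query_forecast(
    ForecastArn="arn:aws:forecast:us-west-2:123456789012:forecast/demo_forecast",
    Filters={"item_id": "sku-123"},
)

predictions = response["Forecast"]["Predictions"]  # keys like "p10", "p50", "p90"
for quantile, points in predictions.items():
    first = points[0]
    print(f"{quantile}: {first['Value']:.1f} at {first['Timestamp']}")
```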

Use related time series

Amazon has made Forecast so easy to use with related time series that you really have nothing to lose by using them to make your forecast more robust. All you have to do is make the related time series use the same time units as your target time series.

One way to create a related dataset is to use categorical or binary data whose future values are already known – for example, whether the future time is on a weekend or a holiday or there is a concert playing – anything that is on a schedule that you can rely on.

Even if you don’t know if something will happen, you can create multiple forecasts where you vary the future values. For example, if you want to forecast attendance at a baseball game this Sunday, and you want to model the impact of weather, you could create a feature is_raining and try one forecast with “yes, it’s raining” and another with “no, it’s not raining.”
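A minimal pandas sketch of that idea, assuming a daily series and a hypothetical item id – it writes one related time series CSV per weather scenario so each can feed its own forecast:

```python
import pandas as pd

# Hypothetical sketch: build a related time series with an "is_raining" flag for the
# forecast horizon, one CSV per weather scenario. Dates and the item id are placeholders.
horizon = pd.date_range("2020-06-01", periods=7, freq="D")

for scenario, raining in [("rain", 1), ("no_rain", 0)]:
    related = pd.DataFrame({
        "timestamp": horizon.strftime("%Y-%m-%d"),
        "item_id": "stadium-attendance",   # must match the item ids in the target series
        "is_raining": raining,
    })
    # The related series must use the same frequency as the target series.
    related.to_csv(f"related_{scenario}.csv", index=False, header=False)
```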

Look at a range of forecasted values, not a singular forecasted value

Don’t expect the numbers to be precise. One of the biggest sources of value from a forecast is knowing the likely range of actual values. Then, take some time to analyze what drives that range. Can it be made smaller (more accurate) with more related data? If so, can you control any of that related data?

Visualize the results

Show historical and forecast values on one chart. This will give you a sense of how the forecast is trending. You can backfill the chart with actuals as they come in, so you can learn more about your forecast’s accuracy.
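A minimal matplotlib sketch of that kind of chart, assuming the history and the exported forecast have already been pulled down as CSVs; the file paths and column names are assumptions for illustration:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical sketch: plot history plus the forecast range on one chart.
# File names and column names are assumptions for illustration.
history = pd.read_csv("history.csv", parse_dates=["timestamp"])
forecast = pd.read_csv("forecast_export.csv", parse_dates=["date"])

fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(history["timestamp"], history["demand"], label="actuals")
ax.plot(forecast["date"], forecast["p50"], label="p50 forecast")
ax.fill_between(forecast["date"], forecast["p10"], forecast["p90"],
                alpha=0.2, label="p10-p90 range")
ax.legend()
ax.set_title("Demand: history and forecast range")
plt.show()
```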

Choose a “medium term” time horizon

Your time horizon – how far into the future your forecast looks – is capped at 500 timesteps or one-third of your target time series length, whichever is smaller. We recommend choosing up to a 10% horizon for starters. This will give you enough forward-looking forecasts to evaluate the usefulness of your results without taking too long.

Save your data prep code

Save the code you use to stage your data for the forecast. You will be doing this again, so you don’t want to repeat yourself. An efficient way to do this is to use PySpark code inside a SageMaker notebook. If you end up using your forecast in production, you will eventually place that code into an AWS Glue ETL pipeline (which also uses PySpark), so it is best to just use PySpark from the start.

Another advantage of using PySpark is that the utilities to load and write CSV-formatted data to and from S3 are dead simple, and you will be using CSV files for all your Forecast work.
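A minimal PySpark sketch of that staging step, with placeholder bucket paths and column names – it reshapes raw sales data into the timestamp/item_id/demand layout Forecast expects:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical sketch: stage raw sales data into the three-column CSV layout
# (timestamp, item_id, demand). Paths and source column names are placeholders,
# and order_date is assumed to be parseable as a date.
spark = SparkSession.builder.appName("forecast-prep").getOrCreate()

raw = spark.read.csv("s3://my-bucket/raw/sales/", header=True, inferSchema=True)

staged = (
    raw
    .withColumn("timestamp", F.date_format(F.col("order_date"), "yyyy-MM-dd"))
    .withColumn("demand", F.col("units_sold").cast("float"))
    .select("timestamp", "item_id", "demand")
)

# PySpark writes plain CSV without a header row by default, which suits Forecast imports.
staged.write.mode("overwrite").csv("s3://my-bucket/forecast/target/")
```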

Interpret the results!

AWS provides a guide to interpreting results, but admittedly it is a little dense if you are not a statistician. One easy metric to look at, especially if you use multiple algorithms, is Root Mean Squared Error (RMSE). You want this as low as possible, and, in fact, Amazon chooses its winning algorithm mostly on this value.
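For reference, RMSE is just the square root of the average squared difference between forecasted and actual values. A quick illustration with made-up numbers:

```python
import numpy as np

# RMSE illustration: square root of the mean squared error between
# forecasted and actual values. Lower is better. The numbers are made up.
actuals = np.array([120, 135, 128, 150])
forecasted = np.array([118, 140, 125, 160])

rmse = np.sqrt(np.mean((forecasted - actuals) ** 2))
print(round(rmse, 2))  # ~5.87
```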

It will take some time

How long will it take? If you do select AutoML, expect model training to take a while – at least 20 minutes for even the smallest datasets. If your dataset is large, it can take an hour or several hours. The same is true when you generate the actual forecast. So, start it in the beginning of the day so you can work with it before lunch, or near the end of your day so you can look at it in the morning.

Data prep details (for your data engineer)

  • Match the ‘forecast frequency’ to the frequency of your observation timestamps.
  • Set the demand datatype to a float prior to import (it might be an integer).
  • Get comfortable with `strptime` and `strftime` – you have only two options for timestamp format.
  • Assume all data are from the same time zone. If they are not, make them that way using Python datetime methods.
  • Split out a validation set like this: https://github.com/aws-samples/amazon-forecast-samples/blob/master/notebooks/1.Getting_Data_Ready.ipynb
  • If using pandas dataframes, do not use the index when writing to CSV (see the sketch after this list).
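A minimal pandas sketch covering several of these steps, with hypothetical file names, column names, and source time zone:

```python
import pandas as pd

# Hypothetical sketch of the prep steps above. File names, column names,
# and the target time zone are assumptions for illustration.
df = pd.read_csv("raw_demand.csv")

# Normalize timestamps: parse, put everything in one time zone, then format
# the values consistently for import.
ts = pd.to_datetime(df["timestamp"], utc=True).dt.tz_convert("US/Pacific")
df["timestamp"] = ts.dt.strftime("%Y-%m-%d %H:%M:%S")

# The demand value should be a float, not an integer.
df["demand"] = df["demand"].astype(float)

# Hold out the last 30 days as a simple validation window.
cutoff = ts.max() - pd.Timedelta(days=30)
train = df[ts <= cutoff]
validation = df[ts > cutoff]

# Do not write the pandas index into the CSVs.
train.to_csv("target_train.csv", index=False, header=False)
validation.to_csv("target_validation.csv", index=False, header=False)
```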

Conclusion

If you’re ever asked to produce a forecast or predict some number in the future, you now have a robust method at your fingertips to get there. With Amazon Forecast, you have access to Amazon.com’s optimized algorithms for time series forecasting. If you can get your target data into CSV format, then you can use a forecast. Before you start, have a business goal in mind – it is essential to think about ranges of possibilities rather than a discrete number. And be sure to keep in mind our best practices for creating a forecast, such as using a “medium term” time horizon, visualizing the results, and saving your data preparation code.

If you’re ready to make better, data-driven decisions, trust your dashboards and reports, confidently bring in new sources for enhanced analysis, create a culture of DataOps, and become AI-ready, contact us to schedule a demo of our DataOps Foundation.

-Rob Whelan, Practice Director, Data & Analytics


How to Use Amazon Forecast for your Business

How to use Amazon Forecast: What Is it Good For?

How many times have you been asked to predict revenue for next month or next quarter? Do you mostly rely on your gut? Have you ever been asked to support your numbers? Cue sweaty palms frantically churning out spreadsheets.

Maybe you’ve suffered from the supply chain “bullwhip” effect: you order too much inventory, which makes your suppliers hustle, only to deliver a glut of product that you won’t need to replace for a long time, which makes your suppliers sit idle.

Wouldn’t it be nice to plan for your supply chain as tightly as Amazon.com does? With Amazon Forecast, you can do exactly that. In part one of this two-part article, I’ll provide an overview of the Amazon Forecast service and how to get started. Part two of the article will focus on best practices for using Amazon Forecast.

The backstory

Amazon knows a thing or two about inventory planning, given its intense focus on operations. Over the years, it has used multiple algorithms for accurate forecasting and has even fine-tuned them to run in an optimized way on its cloud compute instances. Forecasting demand is important, if for nothing else than to get a “confidence interval” – a range where it’s fairly certain reality will fall, say, 80% of the time.

In true Amazon Web Services fashion, Amazon decided to offer that forecasting capability as a service: Amazon Forecast, a managed service that takes your time series data in CSV format and spits out a forecast into the future. It gives you a customizable confidence interval that you can set to 95%, 90%, 80%, or whatever percentage you need. And you can re-use and re-train the model with actuals as they come in.

When you use the service, you can tell it to run up to five different state-of-the-art algorithms and pick a winner. This saves you the time of deliberating over which algorithm to use.

The best part is that you can make the forecast more robust by adding in “related” time series – any data that you think is correlated to your forecast. For example, you might be predicting electricity demand based on macro scales such as season, but also on a micro level such as whether or not it rained that day.

How to use

Amazon Forecast is considered a serverless service: you don’t have to manage any compute instances to use it. Because it is serverless, you can create multiple scenarios simultaneously – up to three at once. There is no reason to run them in series; you can come up with three scenarios and fire them off all at once. Additionally, it is low-cost, so it is worth trying and experimenting with often. As is generally the case with AWS, you end up paying mostly for the underlying compute and storage rather than any major premium for using the service. And, like any other machine learning task, you have a huge advantage if you have invested in keeping your data orderly and accessible.

Here is a general workflow for using Amazon Forecast:

  1. Create a Dataset Group. This is just a logical container for all the datasets you’re going to use to create your predictor.
  2. Import your source datasets. A nice thing here is that Amazon Forecast facilitates the use of different “versions” of your datasets. As you go about feature engineering, you are bound to create different models which will be based on different underlying datasets. This is absolutely crucial for the process of experimentation and iteration.
  3. Create a predictor. This is another way of saying “create a trained model on your source data.”
  4. Create a forecast using the predictor. This is where you actually generate a forecast looking into the future.

To get started, stage your time series data in a CSV file in S3. You have to follow AWS’s naming convention for the column names. You can also optionally use your domain knowledge to enrich the data with “related time series”: if you think external factors drive the forecast, add those data series, too. You can add multiple complementary time series.

When your datasets are staged, you create a Predictor. A Predictor is just a trained machine learning model. If you choose the “AutoML” option, Amazon will make up to five algorithms compete. It will save the results of all of the models that trained successfully (sometimes an algorithm clashes with the underlying data).

Finally, when your Predictor is done, you can generate a forecast, which will be stored in S3 where it can easily be shared with your organization or fed into any business intelligence tool. It’s always a good idea to visualize the results to give them a reality check.
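For teams that prefer to script the workflow rather than click through the console, here is a hedged boto3 sketch of the four steps above. All names, the IAM role, the S3 path, the domain, and the horizon are placeholders, and each create_* call is asynchronous, so in practice you wait for each resource to finish creating before starting the next step.

```python
import boto3

# Hypothetical sketch of the four workflow steps. All names, ARNs, and settings
# are placeholders; each create_* call is asynchronous, so poll the matching
# describe_* call until the resource is ready before moving on.
forecast = boto3.client("forecast", region_name="us-west-2")

# 1. Dataset group: a logical container for the datasets.
dsg = forecast.create_dataset_group(DatasetGroupName="demo_dsg", Domain="RETAIL")

# 2. Dataset plus an import job that loads the CSV staged in S3.
ds = forecast.create_dataset(
    DatasetName="demo_target",
    Domain="RETAIL",
    DatasetType="TARGET_TIME_SERIES",
    DataFrequency="D",
    Schema={"Attributes": [
        {"AttributeName": "timestamp", "AttributeType": "timestamp"},
        {"AttributeName": "item_id", "AttributeType": "string"},
        {"AttributeName": "demand", "AttributeType": "float"},
    ]},
)
forecast.update_dataset_group(DatasetGroupArn=dsg["DatasetGroupArn"],
                              DatasetArns=[ds["DatasetArn"]])
forecast.create_dataset_import_job(
    DatasetImportJobName="demo_import",
    DatasetArn=ds["DatasetArn"],
    DataSource={"S3Config": {
        "Path": "s3://my-bucket/forecast/target/",
        "RoleArn": "arn:aws:iam::123456789012:role/ForecastS3Access",
    }},
    TimestampFormat="yyyy-MM-dd",
)

# 3. Predictor: train a model, letting AutoML pick the winning algorithm.
predictor = forecast.create_predictor(
    PredictorName="demo_predictor",
    ForecastHorizon=30,
    PerformAutoML=True,
    InputDataConfig={"DatasetGroupArn": dsg["DatasetGroupArn"]},
    FeaturizationConfig={"ForecastFrequency": "D"},
)

# 4. Forecast: generate forward-looking predictions from the trained predictor.
forecast.create_forecast(ForecastName="demo_forecast",
                         PredictorArn=predictor["PredictorArn"])
```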

In part two of this article, we’ll dig into best practices for using Amazon Forecast. And if you’re interested in learning even more about transforming your organization to be more data-driven, check out our DataOps Foundation service that helps you transform your data analytics processes.

-Rob Whelan, Practice Director, Data & Analytics


AWS re:Invent 2019: AWS Product/Service Review, a Networking Perspective

Announcements for days!

AWS re:Invent 2019 has come and gone, and now the collective audience has to sort through the massive list of AWS announcements released at the event. According to the AWS re:Invent 2019 Recap communication, AWS released 77 products, features, and services in just 5 days! Many of the announcements were in the Machine Learning (ML) space (20 total), closely followed by announcements around Compute (16 total), Analytics (6 total), Networking and Content Delivery (5 total), and the AWS Partner Network (5 total), amongst others. In the area of ML, things like AWS DeepComposer, Amazon SageMaker Studio, and Amazon Fraud Detector topped the list, while in the Compute, Analytics, and Networking space, Amazon EC2 Inf1 Instances, AWS Local Zones, AWS Outposts, Amazon Redshift Data Lake Export, AWS Transit Gateway Network Manager, and Inter-Region Peering were at the forefront. Here at 2nd Watch we love the cutting-edge ML feature announcements like everyone else, but we always have our eye on the announcements that key in on what our customers need now – announcements that can have an immediate benefit for our customers in their ongoing cloud journey.

All About the Network

In Matt Lehwess’ presentation, Advanced VPC design and new capabilities for Amazon VPC, he kicked off the discussion with a poignant note: “Networking is the foundation of everything; it’s how you build things on AWS. You start with an Amazon VPC and build up from there. Networking is really what underpins everything we do in AWS. All the services rely on networking.” This statement strikes a chord here at 2nd Watch because we have seen that sentiment in action. Over the last couple of years, our customers have been accelerating their use of VPCs, and, as of 2018, Amazon VPC is the number one AWS service used by our customers, with 100% of them using it. We expect that trend to continue as 2019 comes to an end. Networking is not the sexiest part of AWS, but it provides the foundation that brings all of the other services together, so focusing on newer and more efficient networking tools and architectures to get services to communicate is always at the top of the list when we look at new announcements. Here are our takes on the key announcements.

AWS Transit Gateway Inter-Region Peering (Multi-Region)

One exciting feature announcement in the networking space is Inter-Region Peering for AWS Transit Gateway. This feature provides the ability to establish peering connections between Transit Gateways in different AWS Regions. Previously, connectivity between two Transit Gateways could only be achieved through a Transit VPC, which carried the overhead of running your own networking devices as part of that Transit VPC. Inter-Region Peering for AWS Transit Gateway enables you to remove the Transit VPC and connect Transit Gateways directly.

The solution uses a new static attachment type called a Transit Gateway Peering Attachment that, once created, requires an acceptance or rejection from the accepter Transit Gateway. In the future, AWS will likely allow dynamic attachments, so they advise you to create unique ASNs for each Transit Gateway for the easiest transition. The solution also uses encrypted VPC peering across the AWS backbone. Currently, Transit Gateway Inter-Region Peering is available for gateways in the US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and EU (Frankfurt) Regions, with support for other regions coming soon. You also can’t peer Transit Gateways in the same region.

(Source: Matt Lehwess: Advanced VPC design and new capabilities for Amazon VPC (NET305))

On the surface, the ability to connect two Transit Gateways is just an incremental feature, but when you start to think about the different use cases, as well as the follow-on announcements of Multi-Region Transit Gateway Peering and Accelerated VPN, the architectural options really open up. This effectively enables you to create a private, highly performant global network on top of the AWS backbone. Great stuff!

AWS Transit Gateway Network Manager

This new feature is used to centrally monitor your global network across AWS and on premises. Transit Gateway Network Manager simplifies the operational complexity of managing networks across regions and remote locations. It is another AWS feature that takes a dashboard approach, providing a simpler overview of resources that may be spread over several regions and accounts. To use it, you create a Global Network within the tool, an object in the AWS Transit Gateway Network Manager service that represents your private global network in AWS. It includes your AWS Transit Gateway hubs, their attachments, and your on-premises devices, sites, and links. Once the Global Network is created, you extend the configuration by adding Transit Gateways, information about your on-premises devices, sites, links, and the Site-to-Site VPN connections with which they are associated, and then start using it to visualize and monitor your network. It includes a nice geographic world map view to visualize VPNs (whether they’re up, down, or impaired) and Transit Gateway Peering connections.


There’s also a nice topology feature that shows VPCs, VPNs, Direct Connect gateways, and Transit Gateway-to-Transit Gateway peerings for all registered Transit Gateways. It provides an easier way to understand your entire global infrastructure from a single view.

Another key feature is the integration with SD-WAN providers like Cisco, Aviatrix, and others. Many of these solutions will integrate with AWS Transit Gateway Network Manager and automate the branch-cloud connectivity and provide end-to-end monitoring of the global network from a single dashboard. It’s something we look forward to exploring with these SD-WAN providers in the future.

AWS Local Zones

AWS Local Zones is an interesting new service that addresses challenges we’ve encountered with customers. Although listed under Compute rather than Networking and Content Delivery on the re:Invent 2019 announcement list, Local Zones is a powerful new feature with networking at its core.

Latency tolerance for application stacks running in a hybrid scenario (e.g., app servers in AWS, database on-premises) is a standard conversation when planning a migration. Historically, those conversations were predicated on a customer’s proximity to an AWS region. Depending on requirements, customers in Portland, Oregon, might have the option to run a hybrid application stack, while those in Southern California might have been excluded. The announcement of Local Zones (initially just in Los Angeles) opens up those options to markets that were not previously served. We hope this is the first of many localized resource deployments.

That’s no Region…that’s a Local Zone

Local Zones are interesting in that they only have a subset of the services available in a standard region. Local Zones are organized as a child of a parent region; notably, the Los Angeles Local Zone is a child of the Oregon Region. API communication is done through Oregon, and even the name of the LA Local Zone AZ maps to Oregon (Oregon AZ1 = us-west-2a, Los Angeles AZ1 = us-west-2-lax-1a). Organizationally, it’s easiest to think of them as remote Availability Zones of existing regions.

As of December 2019, only a limited number of services are available, including EC2, EBS, FSx, ALB, VPC, and single-AZ RDS. Pricing seems to be roughly 20% higher than in the parent region. Given that this is the first Local Zone, we don’t know whether this will always be true or whether it depends on location. One would assume that Los Angeles would be a higher-cost location whether it was a Local Zone or a full region.

All the Things

To see all of the things that were launched at re:Invent 2019, check out the re:Invent 2019 Announcement Page. For all AWS announcements, not just re:Invent 2019 launches (e.g., things that launched just prior to re:Invent), check out the What’s New with AWS webpage. If you missed the show completely or just want to re-watch your favorite AWS presenters, you can see many of the re:Invent presentations on the AWS Events YouTube channel. After you’ve done all that research and watched all those videos and are ready to get started, you can always reach out to us at 2nd Watch. We’d love to help!

-Derek Baltazar, Managing Consultant

-Travis Greenstreet, Principal Architect


Serverless Aurora – Is it Production-Ready Yet?

In the last few months, AWS has made several announcements around its Aurora offering.

All of these features work toward the end goal of making serverless databases a production-ready solution. Even with the latest offerings, should you explore migrating to a serverless architecture? This blog highlights some considerations when looking to use Backend-as-a-Service (BaaS) at your data layer.

Let’s assume that you’ve either already made the necessary schema changes and migrated, or have a general familiarity with implementing a new database on Aurora classic. Aurora currently comes in two models: provisioned and serverless. A traditional provisioned AWS database either runs on a self-managed EC2 instance or operates as a PaaS offering using an AWS-managed RDS instance. In both cases, you have to allocate memory and CPU, in addition to creating security groups so that applications can connect over a TCP connection string.

In this pattern, issues can arise right at the connection. There is a limit to how many connections can access a database before you start to see performance degradation, or an inability to connect altogether once the limit is maxed out. In addition, your application may receive varying degrees of traffic (e.g., a retail application used during a peak season or promotion). Even if you implement a caching layer in front, such as Memcached or Redis, you still have scenarios where the instance eventually has to scale vertically to a more robust instance or horizontally with replicas to distribute reads and writes.

This is where serverless provides some value. It’s worth recalling that a serverless database does not mean no servers. There are servers there, but they are abstracted away from the user (or in this case, the application). Following recent compute trends, serverless focuses more on writing business logic and less on infrastructure management and provisioning, so you can get from the requirements stage to production-ready more quickly. In the traditional database model, you are still responsible for securing the box, authentication, encryption, and other operations unrelated to the actual business functions.

How Aurora Serverless works

What serverless Aurora provides to help alleviate issues with scaling and connectivity is a Backend as a Service solution. The application and Aurora instance must be deployed in the same VPC and connect through endpoints that go through a network load balancer (NLB). Doing so allows for connections to terminate at the load balancer and not at the application.

By abstracting the connections, you no longer have to create logic to manage load balancing algorithms or worry about making DNS changes to accommodate database endpoint changes. The NLB has routing logic, through request routers, that makes the connection to whichever instance is available at the time, which then maps to the underlying serverless database storage. If the serverless database needs to scale up, a pool of resources is always available and kept warm. In the event the instances scale down to zero, a connection cannot persist.

By having an available pool of warm instances, you now have a pay-as-you-go model where you pay for what you utilize. You can still run into the issue of max connections, which can’t be modified, but the number allowed for smaller 2 and 4 ACU implementations has increased since the initial release.

Note: Cooldowns are not instantaneous and can take up to 5 mins after the instance is entirely idle, and you are still billed for that time. Also, even though the instances are kept warm, the connection to those instances still has to initiate. If you make a query to the database during that time, you can see wait times of 25 seconds or more before the query fully executes.

Cost considerations:

Can you really scale down completely? Technically yes, if certain conditions are met:

  • CPU below 30 percent utilization
  • Less than 40 percent of connections being used

To achieve this and get the cost savings, the database must be completely idle. There can’t be long-running queries or locked tables. Also, activities outside of the application can generate queries – open sessions, monitoring tools, health checks, and so on. The database only pauses when the conditions are met AND there is zero activity.

Serverless Aurora, at $0.06 per ACU-hour, starts at a higher price than its provisioned predecessor at $0.041 per hour. Aurora classic also charges hourly, whereas Serverless Aurora charges by the second with a 5-minute minimum AND a 5-minute cool-down period. We already discussed that cool-downs in many cases are not instantaneous, and on top of that, billing doesn’t stop until an additional 5 minutes after that period. If you go with the traditional minimal setup of 2 ACUs and never scale down the instances, the cost is more expensive by a factor of at least 3. Therefore, to get the same cost payoff, your database would have to run only about a third of the time, which is achievable for dev/test boxes that are parked or apps only used during business hours in a single time zone. Serverless Aurora is supposed to be highly available by default, so if you are getting two instances at this price point, then you are getting a better bargain performance-wise than running a single provisioned instance for an only slightly higher price point.
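A quick back-of-the-envelope comparison using the prices quoted above (a 730-hour month is assumed):

```python
# Back-of-the-envelope comparison using the prices quoted above, assuming a 730-hour month.
serverless_2_acu = 2 * 0.06 * 730   # 2 ACUs that never pause
provisioned = 0.041 * 730           # small provisioned Aurora instance

print(round(serverless_2_acu, 2))                # ~87.60 per month
print(round(provisioned, 2))                     # ~29.93 per month
print(round(serverless_2_acu / provisioned, 1))  # ~2.9x
```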

Allowing for a minimum of 1 ACU gives you the option of scaling a serverless database further down and makes the price point more comparable to RDS, even without enabling pausing.

Migration

Migrating to Serverless Aurora is relatively simple as you can just load in a snapshot from an existing database.

Data API

With the Data API, you no longer need a persistent connection to query the database. In previous scenarios, a fetch could take 25 seconds or more if the query executed after a cool-down period; with the Data API, you can query the serverless database even if it’s been idle for some time. You can also leverage a Lambda function via API Gateway, which works around the VPC requirement. AWS has mentioned it will be publishing performance metrics around the average time it takes to execute a query with the Data API in the coming months.
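A minimal boto3 sketch of a Data API call, with placeholder cluster ARN, secret ARN, database name, and query:

```python
import boto3

# Hypothetical sketch: query an Aurora Serverless cluster through the Data API,
# with no persistent connection. The cluster ARN, secret ARN, database name,
# and table are placeholders.
rds_data = boto3.client("rds-data", region_name="us-east-1")

result = rds_data.execute_statement(
    resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:demo-serverless",
    secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:demo-db-creds",
    database="demo",
    sql="SELECT id, status FROM orders WHERE created_at > :since",
    parameters=[{"name": "since", "value": {"stringValue": "2020-01-01"}}],
)

for row in result["records"]:
    print(row)
```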

Conclusion

With the creation of EC2, Docker, and Lambda functions, we’ve seen a lot of innovation in compute and not as much at the data layer. Traditional provisioned relational databases have difficulty scaling and have a finite limit on the number of connections. By eliminating the need to manage an instance, this level of abstraction presents a strong use case for unpredictable workloads. Kudos to AWS for engineering a solution at this layer. The latest updates over the last few months underscore AWS’s willingness to solve complex problems. Running 1 ACU does bring the cost down to a rate comparable to RDS while providing a mechanism for better performance if you disable pausing. However, while it is now possible to run Aurora Serverless 24/7 more cost-effectively, this scenario contrasts with its signature use case of an on/off database.

Serverless still seems a better fit for databases that are rarely used and only see occasional spikes, or for applications primarily used during business hours. Administration time is still a cost, and serverless databases, despite the progress, still have many unknowns. It can take an administrator some time and patience to get a configuration that is performant, highly available, and not overly expensive. Even though you don’t have to rely on automation and can manually scale your Aurora Serverless cluster, it takes some effort to do so in a way that doesn’t immediately terminate connections. Today, you can leverage ECS or Fargate with Spot Instances and implement a solution that yields similar or better results at a lower cost if a true serverless database is the goal. I would still recommend Serverless Aurora for dev/test workloads, and see if you can work your way up to production for smaller workloads, as the tool still provides much value. Hopefully, AWS releases GA offerings for MySQL 5.7 and PostgreSQL soon.

Want more tips and info on Serverless Aurora or serverless databases? Contact our experts.

-Sabine Blair, Cloud Consultant


AWS re:Invent 2018: Daily Recap – Wednesday

Every year AWS re:Invent gets bigger and better. More people are attending, and even more are participating remotely, than in any previous year. There are also more vendors showing the strength of the AWS ecosystem.

You realized why when Andy Jassy started his keynote session Wednesday morning. The growth rate of AWS is phenomenal. Adoption is up, revenues are up, and AWS responds with customer-driven changes. Three years ago, there were fewer than 100 AWS services, and now, with yesterday’s announcements, there are more than 140. Jassy discussed a lot at the keynote, but the focus was on three major themes:

Storage/Database

The first theme was Storage/Database, with services such as Amazon FSx, which provides a platform for offerings like FSx for Windows File Server. This is like Amazon EFS, but instead of supporting the NFS protocol it supports the SMB protocol, so those running workloads on Windows now have a shared filesystem. If you need a file system for a High Performance Computing cluster, FSx also supports Lustre. I would look for more protocols and services in the future.

FSx was just the tip of the iceberg, with new options such as DynamoDB Read/Write Capacity On Demand, another storage tier for Glacier called Deep Archive, a time-oriented database named Timestream, a fully managed ledger database (QLDB), and even a Managed Blockchain service. Read more about these from AWS:

Glacier Deep Archive
Amazon FSx for Windows File Servers
Amazon FSx for Lustre
DynamoDB Read/Write Capacity On Demand
Amazon Timestream
Amazon Quantum Ledger Database
Amazon Managed Blockchain

Security

The second theme was Security. It surprises no one that AWS is always expanding its offerings in this space; they are fond of saying that security is Job One at AWS. Two interesting announcements here were AWS Control Tower and AWS Security Hub. These will assist in many aspects of managing your AWS accounts and increasing your security posture across your entire AWS account footprint.

Machine Learning/Artificial Intelligence

The final theme was Machine Learning/Artificial Intelligence. We see a lot of effort being put into AWS’ machine learning and artificial intelligence solutions, which shows in the number of announcements this year. New SageMaker offerings, Elastic Inference, and even their own specialized chip all point to a focus in this area.

Amazon Elastic Inference
AWS Inferentia
Amazon SageMaker Ground Truth
AWS Marketplace for machine learning
Amazon SageMaker RL
AWS DeepRacer

Amazon Textract
Amazon Personalize
Amazon Forecast

And we can’t forget the cool toy of the show – DeepRacer. Like Amazon DeepLens from last year, this “toy” car will help you explore machine learning. It has sensors and compute onboard, so you can teach it how to drive. There’s even a DeepRacer League, where you can compete for a trophy at AWS re:Invent 2019!

Outposts

Although not one of the three main themes, and not available until 2019, AWS Outposts was another exciting announcement yesterday. Want to run your own “region” in your datacenter? Take a look at this. It is fully managed, maintained, and supported infrastructure for your datacenter, and it comes in two variants: 1) VMware Cloud on AWS Outposts, which allows you to use the same VMware control plane and APIs you use to run your infrastructure today, and 2) the AWS-native variant of AWS Outposts, which allows you to use the same APIs and control plane you use in the AWS cloud, but on-premises.

If you can’t come to the cloud, it can come to you.

Sessions and Events

There are more sessions than ever at this year’s re:Invent, and the conference agenda is full of interesting and useful events and demos. It’s always great to know that, even if you missed a session, you can stream it on-demand later on the AWS re:Invent YouTube channel. And we can’t forget the expo hall, which has been very heavily trafficked. If you haven’t yet, stop by and see 2nd Watch at booth 2440. We’re giving away one more of those awesome Amazon DeepLens cameras we mentioned earlier in this post. This year’s re:Invent shows that AWS is bigger and better than ever!

-David Nettles, Solutions Architect
