Welcome to Season 2 of Cloud Crunch! We’re back and better than ever with all-new topics, opinions, and an all-star lineup of exciting industry expert guests. We kick off our first episode looking back at AWS re:Invent 2020 with Solutions Architect pro and re:Invent veteran Joe Conlin. With a 3-week event, there is a lot to cover, so get the scoop on what AWS announced, what we found exciting, and what you might’ve missed. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.
Thanks for listening! If there are issues or topics you’d like to hear about, we’d love to hear your suggestions. Email us at cloudcrunch@2ndwatch.com.
AWS re:Invent 2020 was a little different, to say the least. The line for the restroom was way shorter; if you entered the Tatonka Challenge, you were guaranteed to win, or at least have a really good shot at the trophy, depending on how many of your family members joined in and how many wings your air fryer could handle; and the lines for shuttle buses were non-existent, as the commute time between sessions was reduced from hours to seconds depending on how fast you can click. However, in typical AWS fashion, they made lemonade out of lemons and put on one of the best public cloud virtual events of the year.
Instead of the typical action-packed, sleepless week in Vegas, AWS broke it up into 3 weeks, sprinkling all of their major announcements throughout. Vendors set up breakout sessions and virtual booths to discuss solutions and products and to hold one-on-one sessions with potential leads via chat and live demos. Hunters of precious swag had to engage with vendors and participate in specific activities to obtain their various rewards. With all of the turmoil going on in the world, AWS was still able to announce over 140 new products and features at re:Invent 2020. Here are just a few of the highlights.
Week 1
For the first time ever, re:Invent was opened to the world free of charge and attracted over 500,000 participants. Andy Jassy’s overall keynote theme was centered around the customer driving innovation within AWS based on solving their needs. In part due to the pandemic, cloud adoption has accelerated this year and has fueled AWS’ continued growth.
AWS announced new compute innovations including macOS instances (literally integrating a Mac mini into a server chassis) as well as tremendous investments in the processor space with their Graviton2 processors and Trainium chips. If you didn’t catch week 1, here’s what you missed:
New C6g Graviton2 instances announced, offering almost 50% savings
Lower cost for AWS Inferentia, the inference chip used by Alexa
Habana Gaudi-based EC2 instances: GPU-based machine learning instances
AWS Trainium: an AWS-designed ML chip used in EC2 and SageMaker
Reinventing Storage:
gp3 for EBS, allowing 4x peak throughput
io2 Block Express: the first SAN built for the cloud
The mindset of “100% in the cloud all the time” is slowly shifting to include new options for hybrid environments with the announcements of ECS Anywhere and EKS Anywhere, allowing customers to run their workloads in their own data centers. Taking it a step further is the announcement of Amazon Monitron, which uses machine learning to detect abnormal behavior in industrial equipment and predict failures. Placing compute closer to the customer (edge computing) has become more important, especially as connectivity providers roll out 5G. To allow for this evolution, AWS has released AWS Wavelength. Also, additional options for Outposts (1U and 2U server sizes) have been released for customers not requiring a full cabinet of hardware.
Data science, AI, and machine learning have become front and center as customers continue to take advantage of cloud native technologies. Making the best use of your data and making it work for your business have been a huge focus this year. Some of the highlights include:
Amazon SageMaker Data Wrangler: Clean and aggregate data to prepare it for machine learning.
AWS Glue Elastic Views: Easily combine and replicate data from different data stores.
Amazon CodeGuru: Automate code reviews and identify your most expensive lines of code.
Amazon DevOps Guru: Automatically detect operational issues and recommend actions to fix them.
Amazon QuickSight Q: Ask any question in natural language and get answers in seconds.
Amazon Connect Wisdom: Reduces the time agents spend finding answers for customers.
AWS partner relationships continue to be a central focus as well, and this was highlighted by Doug Yeum in his keynote:
Cohesity DMaaS (Data Management as a Service) announcement.
AWS SaaS Boost: Open source SaaS reference environment to accelerate traditional applications to SaaS on AWS.
AWS ISV Partner Path: More access to millions of active AWS customers with AWS field sellers globally.
Managed entitlements for AWS Marketplace: Automate 3rd party software license distribution and simplify entitlement tracking.
AWS Service Catalog App Registry: Define and associate resources to better manage applications.
AWS Energy Competency: Helping customers accelerate their transition to a more balanced and sustainable energy future.
Week 2
Kicking off week two was an infrastructure-specific deep dive with Peter DeSantis. Given my background in the data center space, I found his keynote extremely interesting, as I have noticed over the past few years that questions and conversations about how cloud services are actually provided are very common. Before the “cloud,” and even before virtual machines existed, servers were deployed into data centers and enterprises ran their mission-critical workloads on them. Some companies deployed and managed their own physical infrastructure, and some outsourced the management of those environments to MSPs, but the overall principles have not changed over the years. Yes, your workloads run “in the cloud,” but behind that are still data centers housing servers, networking gear, storage, cooling, water chillers, power distribution, connectivity, etc.
AWS has taken those principles and scaled them to another level, focusing on redundancy and sustainability to ensure that, if built properly, their customers’ workloads have no single point of failure and can keep running should an outage occur. AWS has not only made strides in the disk storage and processor space, but they have also designed and integrated their own switchgear control systems and custom-designed, rack-installed UPS infrastructure.
These are items that users of the cloud don’t have to deal with, and that is one of the major selling points of moving to the cloud. You don’t have to worry about rack space, power, cooling, hardware purchases, maintenance contracts, and the list goes on and on. But rest assured that the man behind the curtain is very aware of these items and is taking best-in-class steps to ensure that the infrastructure behind the scenes is always on.
Next on the list was the machine learning keynote with Swami Sivasubramanian. This was more of a deep dive into some of the announcements made by Andy Jassy in week one, and he did not disappoint. As customers continue the shift to cloud native, ML and AI services have become front and center in their application modernization journey. Of the 250+ new products and product enhancements announced by AWS in 2020, most were centered around SageMaker and 11 other AI and ML products.
ML Frameworks and Infrastructure
AWS highlighted AWS Inferentia, a high-performance machine learning inference chip that powers EC2 Inf1 instances. Inferentia boasts 45% lower costs and 30% higher throughput than comparable GPU-based instances and helps Alexa achieve 25% lower end-to-end latency. AWS Trainium is another high-performance machine learning chip, with the most teraflops of compute power for ML, that enables a broader set of ML applications.
Amazon SageMaker
AWS had several announcements around Amazon SageMaker.
“Thus, we need a platform where the data scientist will be able to leverage his existing skills to engineer and study data, train and tune ML models and finally deploy the model as a web-service by dynamically provisioning the required hardware, orchestrating the entire flow and transition for execution with simple abstraction and provide a robust solution that can scale and meet demands elastically.” – Jojo John Moolayil, AWS AI Research Scientist
SageMaker Data Wrangler is a faster way to prepare data for ML without a single line of code.
SageMaker Clarify provides machine learning developers with greater visibility into their training data and models so they can identify and limit bias and explain predictions.
SageMaker Debugger helps identify bottlenecks, visualizes system resources such as GPU, CPU, I/O, and memory, and provides adjustment recommendations.
AI Services:
The most important take-away from this keynote is AWS’ goal of the democratization of machine learning, or the transparent embedding of ML functionality into other AWS services.
“The company’s overall aim is to enable machine learning to be embedded into most applications before the decade is out by making it accessible to more than just experts.” – Andy Jassy, AWS CEO
With that goal in mind, AWS announced Redshift ML, which imports trained models into the data warehouse and makes them accessible using standard SQL queries. Use SQL statements to create and train Amazon SageMaker machine learning models using your Redshift data and embed them directly in reports.
Aurora ML enables you to add ML-based predictions to applications via the familiar SQL programming language, so you don’t need to learn separate tools or have prior machine learning experience. It provides simple, optimized, and secure integration between Aurora and AWS ML services without having to build custom integrations or move data around.
Neptune ML brings predictions to their fully managed graph database service in the form of graph neural networks and the Deep Graph Library.
For companies involved with handling medical data, Amazon HealthLake is worth looking at. With built-in data query, search, and ML capabilities, you can seamlessly transform data to surface meaningful medical information at petabyte scale.
Week 3
Wrapping up the final week of re:Invent 2020 was Werner Vogels, rocking his typically iconic t-shirt, though unfortunately not announcing who would be playing at re:Play this year. Presenting from the historic SugarCity factory in the Netherlands, he masterfully wove in the story of transforming and adapting to external events. To say that COVID has impacted all aspects of our lives in 2020 would be an understatement, but when presented with challenges, innovators continue to find ways to overcome those obstacles.
Collaboration and remote working were beyond challenging for everyone in 2020. AWS CloudShell was announced to give users browser-based, command-line access to AWS resources directly from the console, including the AWS CLI and even 1GB of persistent storage, at no additional cost. In addition, enhancements to AWS Cloud9 were announced that enable users to develop, run, and debug code from a browser.
To help mitigate potential issues in the future, AWS announced the Fault Injection Simulator, which sounds like a load test on steroids built around chaos engineering. Chaos engineering deliberately pushes an application or environment to its limits to surface any potential issues, bottlenecks, or failures before they reach production and end users.
Additionally, Werner focused on helping the community and sustainability. The pandemic has financially hurt millions of people and AWS has developed the re:Start program designed to help the unemployed develop new skills that will allow them to pursue new career paths.
In summary, AWS continues to dominate the public cloud market and rapidly innovates based on their customer requirements. We may not have been standing elbow to elbow with 60,000 of our closest friends, navigating the miles and miles of casino floors, or enjoying all of the surprises of re:Invent in-person this year, but AWS did a stellar job of bringing us together virtually. Hopefully in a year’s time, we will all be back together and enjoying the wonderful craziness that is AWS re:Invent, Vegas style.
Week 3 of AWS re:Invent 2020 saw Werner Vogels’ much-anticipated keynote, and it was worth the wait. He masterfully wove the story of transforming and adapting to external events into a number of announcements, with an overall theme focused on how organizations can transform quickly and the advantage that brings. Here is your week 3 recap.
The third week of re:Invent saw Werner Vogels’ much-anticipated keynote, and it was worth the wait. Dr. Vogels began with a little storytelling about the location from which he was broadcasting the keynote: a now-defunct sugar factory called SugarCity. From there he set up his next guest speakers, Lea von Bidder, CEO of Ava, and Nicole Yip, Engineering Manager of Direct Shopper Technology for the LEGO Group. Their driving message, which has been an ongoing theme for the keynote series, was becoming an organization that can transform quickly in response to changing markets and environments. This should be an important driver for all organizations given this past year.
I love the storytelling immensely, but let’s be honest here, the new product announcements are what we’re looking for. This year didn’t fail to deliver.
AWS CloudShell
AWS CloudShell seems really neat, and I can’t wait to give it a try. CloudShell gives us the ability to use a fully AWS-enabled shell, launched right from the console UI. From there, we can use all our favorite AWS CLI commands _without having to set up any credentials_. This gives administrators another layer of security while still allowing users to interact with AWS from the command line.
AWS Fault Injection Simulator (coming in 2021)
Ahhhh… Good old chaos! The announcement of AWS Fault Injection Simulator is a direct result of AWS giving its customers the ability to transform or adapt to external inputs. This new tool will allow teams to quickly set up chaos experiments across a swath of AWS resources. Injecting simulated faults, like API latency or a server going down, helps teams find gaps in their systems so they can be properly addressed. Don’t be afraid of chaos!
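Fault Injection Simulator wasn’t yet available at announcement time, but the core loop of a chaos experiment is easy to sketch. Below is a minimal, hypothetical Python illustration (not the FIS API): wrap a dependency so it fails at a configurable rate, then check whether your retry policy survives.

```python
import random

def inject_faults(func, error_rate=0.3, rng=None):
    """Wrap a callable so it randomly fails, mimicking an injected fault."""
    rng = rng or random.Random()
    def chaotic(*args, **kwargs):
        if rng.random() < error_rate:
            raise ConnectionError("injected fault: simulated dependency outage")
        return func(*args, **kwargs)
    return chaotic

def call_with_retries(func, attempts=5):
    """A simple retry loop -- the resilience mechanism under test."""
    for _ in range(attempts):
        try:
            return func()
        except ConnectionError:
            continue  # transient fault: try again
    raise RuntimeError("service unavailable after retries")

# Experiment: does our retry policy survive a 30% failure rate?
flaky = inject_faults(lambda: "ok", error_rate=0.3, rng=random.Random(42))
print(call_with_retries(flaky))
```

FIS itself targets real AWS resources (EC2, ECS, EKS, RDS) rather than in-process wrappers, but the experiment shape is the same: inject a fault, observe, and verify the system recovers.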
Amazon Managed Service for Prometheus and Amazon Managed Service for Grafana
The popular open source tools Prometheus, a monitoring and alerting system, and Grafana, a visualization platform that works beautifully with Prometheus, give teams the monitoring, alerting, and _analysis_ every modern organization needs. Now that there are managed offerings, teams can get the benefits of the tools without the administration hassle. You’ll also get scaling, security, and redundancy, as you’d expect from a managed service. Monitor everything!
Amazon Quantum Solutions Lab
The final announcement from the keynote was a note about quantum computing, and how this technology has the potential to change, not just computing, but science itself. Always keeping an eye on the future, Dr. Vogels announced Amazon Quantum Solutions Lab, where you can engage in research that will help you identify possible applications of the technology in your business. This is an exciting step in the direction of quantum technologies!
The third week of re:Invent was great, and there are a ton of new services and tools coming out of this year’s virtual conference. 2021 is already shaping up to be a cloud computing blast. Check back here next week for a full conference recap and highlights, take a break for the holidays, and come back again in January for our Top AWS Products of 2020 report and our 2021 cloud predictions.
Snoop has a message for all AWS re:Invent 2020 attendees. Watch his Cameo, then visit our re:Invent 2020 sponsor page to watch our breakout session, “Reality Check: Moving the Data Lake from Storage to Strategic” for your chance to win a Sony PlayStation 5! Add a little extra merry to your holiday season, with 2nd Watch.
We’re back to recap the insights from week 2 of AWS re:Invent. First up was Peter DeSantis, AWS Sr. VP of Global Infrastructure, with the Infrastructure keynote. DeSantis spoke at length about the ongoing improvements in their data centers and their impact on high availability and resilience. Then, in the Machine Learning keynote, Dr. Swami Sivasubramanian, Amazon’s VP of Machine Learning, spoke on the current state of machine learning as a field and how AWS is leading the charge for innovation. Watch to see what you missed last week.
As an Electrical Engineer, I always look forward to hearing from Amazon’s Sr. VP of Global Infrastructure, Peter DeSantis. Data Science and Machine Learning may be the Chanel and Dior of cloud, but without the ground up infrastructure and processors, they’d trip on their heels and end up as runway fashion roadkill. The infrastructure and processors are the bedrock upon which all AWS services, and customer trust, are built.
DeSantis spoke at length about the ongoing improvements in their data centers and their impact on high availability and resilience – specifically their custom-developed switchgear control systems and custom-designed, rack-installed UPS.
Most relevant to all AWS customers, he gave eagerly awaited details about the newest Graviton2 processors. This CPU was designed for running applications at scale in the cloud. Graviton2 instances provide 40% better price performance over comparable x86-based instances and, relative to the first-generation Graviton, deliver 7x more performance, 4x more compute cores, 5x faster memory, and 2x larger caches. They also deliver additional security with always-on 256-bit DRAM encryption and faster per-core encryption performance, and they support encrypted EBS storage volumes by default.
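The “40% better price performance” claim is just price divided by throughput. Here is a quick sketch of the arithmetic with illustrative numbers (the hourly prices and speedup below are assumptions chosen for the example, not quoted AWS figures):

```python
# Illustrative price-performance comparison. All figures are assumptions
# for the sake of the arithmetic -- check current EC2 pricing for real data.
x86_price_per_hour = 0.096       # hypothetical m5.large-class instance
graviton_price_per_hour = 0.077  # hypothetical m6g.large-class instance
relative_throughput = 1.34       # assume the Graviton2 box is 34% faster

# Cost per unit of work = price / throughput. Lower is better.
x86_cost_per_unit = x86_price_per_hour / 1.0
graviton_cost_per_unit = graviton_price_per_hour / relative_throughput

improvement = 1 - graviton_cost_per_unit / x86_cost_per_unit
print(f"price-performance improvement: {improvement:.0%}")  # → 40%
```

The point of the exercise: a lower sticker price combined with a modest throughput edge compounds into a much larger price-performance gap, which is why the metric matters more than raw hourly cost.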
Finally, he demonstrated Amazon’s core principles of corporate citizenship and global stewardship through its immense investment in renewable energy and combating climate change.
Although not every re:Invent attendee is interested in AWS’ custom switchgear control system or Neoverse-core-powered electronic design automation, there was an unpolished gem of a takeaway that applies to most customers – the cost/benefit advantage of buy versus build. AWS spends enormous time, expense, and effort designing and redesigning their infrastructure for performance, sustainability, and operational simplicity precisely so that we don’t need to.
For companies moving to the cloud, especially those for whom technology is not their core business, a CIO who suggests they should build and manage their own datacenters might soon find themselves “deciding to spend more time with their family.” By extension, managers, executives and technologists who fail to give proper consideration to the value of letting AWS do the heavy lifting further up the stack do so at their own peril and at the detriment to the progress and success of their company.
In the Machine Learning keynote, Dr. Swami Sivasubramanian, Amazon’s VP of Machine Learning, spoke on the current state of machine learning as a field and how AWS is leading the charge for innovation in it – a task only slightly less difficult than capturing the dynamics of cryptocurrency speculation in under an hour. His key product and service announcements – AWS Inferentia and Trainium, the SageMaker family (Data Wrangler, Clarify, and Debugger), and the push to democratize ML through Redshift ML, Aurora ML, Neptune ML, and Amazon HealthLake – are covered in detail in the recap above.
I hope you enjoy week 3 of the conference and will join us for the week 3 recap, as well as an overall conference recap, next week here on our blog!
AWS re:Invent 2020 kicked off virtually last week, and there’s a lot to unpack! Andy Jassy’s much-anticipated keynote focused on the increased need for rapid iteration and transformation, with product and service announcements that did not disappoint. Here is your recap of week 1 at AWS re:Invent 2020.
AWS re:Invent kicked off virtually with registration free and open to everyone for the first time this year. Andy Jassy’s much-anticipated keynote, as well as the Partner Keynote with Doug Yeum, were the featured content for the first week. Both stressed the increased need for rapid iteration and transformation.
With the announcements of ECS Anywhere and EKS Anywhere, AWS itself has seemingly begun to transform from attempting to be all things to all customers to having a more multi-cloud approach. These two services are in the same vein as Anthos on Google Cloud Platform and Arc on Microsoft Azure. All of these services allow customers to run containers in the environments of their choosing. AWS announced S3 Strong Consistency, which also brings them in line with GCP and Azure. Andy Jassy did make sure to differentiate Amazon from Microsoft by specifically calling them out as incumbents that customers “are fed up with and sick of.” This was part of the announcement of Babelfish, an open-source MSSQL-to-PostgreSQL translation layer.
Approximately 30 products and features were announced in these two keynotes, but the one that will impact almost every AWS customer is EBS gp3. gp3 allows you to “provision performance apart from capacity.” On gp2, you get increased performance by increasing the size of the volume. By switching EBS volumes to the new gp3 volume type, customers can provision IOPS separately and pay only for the volume size they need. These new volumes are apparently faster and cheaper than gp2 in every way. Many customers plan to switch all supported volumes to gp3 as soon as possible, but there are still some concerns about support for boot volumes.
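The practical effect of decoupling performance from capacity shows up in a back-of-the-envelope cost model. The prices below are illustrative us-east-1-style figures (roughly $0.10/GiB-month for gp2; $0.08/GiB-month for gp3 plus charges for IOPS above 3,000 and throughput above 125 MiB/s) and should be checked against current AWS pricing:

```python
def gp2_monthly_cost(size_gib, price_per_gib=0.10):
    """gp2: pay per GiB; IOPS scale with size (3 IOPS/GiB), so extra
    performance means paying for capacity you may not need."""
    return size_gib * price_per_gib

def gp3_monthly_cost(size_gib, iops=3000, throughput_mibps=125,
                     price_per_gib=0.08, price_per_iops=0.005,
                     price_per_mibps=0.04):
    """gp3: baseline 3,000 IOPS and 125 MiB/s included; provision more
    performance independently of capacity. Prices are illustrative."""
    cost = size_gib * price_per_gib
    cost += max(0, iops - 3000) * price_per_iops
    cost += max(0, throughput_mibps - 125) * price_per_mibps
    return cost

# On gp2, reaching ~6,000 IOPS requires a ~2,000 GiB volume (3 IOPS/GiB).
# On gp3, keep the 500 GiB you actually need and provision 6,000 IOPS.
print(f"gp2 (2000 GiB for 6000 IOPS): ${gp2_monthly_cost(2000):.2f}/mo")
print(f"gp3 (500 GiB + 6000 IOPS):    ${gp3_monthly_cost(500, iops=6000):.2f}/mo")
```

Because gp2 ties IOPS to volume size, workloads that need performance but not capacity end up over-provisioning storage; gp3 removes that coupling, which is where the savings come from.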
Another important storage announcement was io2 Block Express volumes, which can provide up to 256K IOPS and 4,000 MB/s of throughput. Some customers have been waiting for a cloud storage solution that can compete with the Storage Area Networks they have used in on-premises environments, but, as AWS critic Corey Quinn pointed out, two of these volumes transferring at that throughput across two Availability Zones would cost somewhere in the neighborhood of a dollar per second.
Amazon SageMaker Pipelines, Feature Store, and Data Wrangler (not to be confused with aws-data-wrangler by AWS Labs) were also announced. These tools will be welcomed by companies that need to regularly clean data, store and retrieve associated metadata, and consistently re-deploy machine learning models. QuickSight Q was demoed and seems to be a natural language processing marvel: it can answer business intelligence questions in QuickSight without pre-defined data models.
AWS Glue has been enhanced with materialized “Elastic” views that can be created with traditional SQL and replicated across multiple data stores. Elastic Views is serverless and monitors source data for changes, keeping the target views up to date.
The big Partner Keynote highlight was the introduction of the Amazon RDS Delivery Partner program as part of the AWS Service Delivery Program. Now customers can easily find partners with the database expertise to ensure disaster recovery, high availability, cost optimization, and security.
There is a lot more to come in the following weeks of re:Invent, and we look forward to doing deeper dives on all of these announcements here on our blog and in our podcast, Cloud Crunch! Check back next week for a Week 2 recap.
Bo Jackson has a message for all AWS re:Invent 2020 attendees. Watch his Cameo, then visit our re:Invent 2020 sponsor page to get your free 2nd Watch sweatpants.