As an Electrical Engineer, I always look forward to hearing from Amazon’s Sr. VP of Global Infrastructure, Peter DeSantis. Data Science and Machine Learning may be the Chanel and Dior of cloud, but without the ground up infrastructure and processors, they’d trip on their heels and end up as runway fashion roadkill. The infrastructure and processors are the bedrock upon which all AWS services, and customer trust, are built.
DeSantis spoke at length about the ongoing improvements in their data centers and their impact on high availability and resilience – specifically their custom-developed switchgear control systems and custom-designed, rack-installed UPS units.
Most relevant to all AWS customers, he gave eagerly awaited details about the newest Graviton2 processors. These CPUs were designed for running applications at scale in the cloud. They provide 40% better price performance than comparable x86-based instances and, compared with first-generation Graviton, deliver 7x more performance, 4x more compute cores, 5x faster memory and 2x larger caches. They also add security with always-on 256-bit DRAM encryption and faster per-core encryption performance, and they support encrypted EBS storage volumes by default.
Finally, he demonstrated Amazon’s core principles of corporate citizenship and global stewardship through its immense investment in renewable energy and combating climate change.
Although not every re:Invent attendee is interested in AWS’ custom switchgear control system or Neoverse-core-powered electronic design automation, there was an unpolished gem of a takeaway that applies to most customers – the cost/benefit advantage of buy versus build. AWS spends enormous time, expense and effort designing and redesigning its infrastructure for performance, sustainability and operational simplicity precisely so that we don’t need to.
For companies moving to the cloud, especially those for whom technology is not their core business, a CIO who suggests they should build and manage their own datacenters might soon find themselves “deciding to spend more time with their family.” By extension, managers, executives and technologists who fail to give proper consideration to the value of letting AWS do the heavy lifting further up the stack do so at their own peril and to the detriment of the progress and success of their company.
In the Machine Learning Keynote, Dr. Swami Sivasubramanian, Amazon’s VP of Machine Learning, spoke on the current state of Machine Learning as a field and how AWS is leading the charge for innovation in it. A task only slightly less difficult than capturing the dynamics of cryptocurrency speculation in under an hour. Here are some of his key product and service announcements.
ML Frameworks and Infrastructure
AWS announced AWS Inferentia, a high-performance machine learning chip that powers EC2 Inf1 instances. Inferentia boasts 45% lower cost per inference and 30% higher throughput than comparable GPU-based instances, and helps Alexa achieve 25% lower end-to-end latency. AWS Trainium is another high-performance machine learning chip, delivering the most teraflops of compute power for ML and enabling a broader set of ML applications.
AWS had several announcements around Amazon SageMaker:
“Thus, we need a platform where the data scientist will be able to leverage his existing skills to engineer and study data, train and tune ML models and finally deploy the model as a web-service by dynamically provisioning the required hardware, orchestrating the entire flow and transition for execution with simple abstraction and provide a robust solution that can scale and meet demands elastically.” – Jojo John Moolayil, AWS AI Research Scientist
- SageMaker Data Wrangler is a faster way to prepare data for ML without a single line of code.
- SageMaker Clarify provides machine learning developers with greater visibility into their training data and models so they can identify and limit bias and explain predictions.
- SageMaker Debugger helps identify training bottlenecks, visualizes system resources such as GPU, CPU, I/O and memory, and provides adjustment recommendations.
The most important take-away from this keynote is AWS’ goal of the democratization of machine learning, or the transparent embedding of ML functionality into other AWS services.
“The company’s overall aim is to enable machine learning to be embedded into most applications before the decade is out by making it accessible to more than just experts.” – Andy Jassy, AWS CEO
With that goal in mind, AWS announced Redshift ML, which imports trained models into the data warehouse and makes them accessible using standard SQL queries. Use SQL statements to create and train Amazon SageMaker machine learning models using your Redshift data and embed them directly in reports.
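The workflow described above can be sketched in SQL. This is an illustrative example only – the table, column, function and model names, the IAM role ARN, and the S3 bucket are hypothetical placeholders, not details from the keynote:

```sql
-- Train a SageMaker model directly from warehouse data
-- (all names and the role ARN below are illustrative placeholders).
CREATE MODEL customer_churn
FROM (SELECT age, plan_type, monthly_usage, churned
      FROM customer_activity)
TARGET churned                     -- column the model learns to predict
FUNCTION predict_churn             -- SQL function generated for inference
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'example-redshift-ml-bucket');

-- Once training completes, the model is invoked like any SQL function,
-- so predictions can be embedded directly in reports:
SELECT customer_id, predict_churn(age, plan_type, monthly_usage)
FROM customer_activity;
```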
Aurora ML enables you to add ML-based predictions to applications via the familiar SQL programming language, so you don’t need to learn separate tools or have prior machine learning experience. It provides simple, optimized, and secure integration between Aurora and AWS ML services without having to build custom integrations or move data around.
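As a sketch of what that SQL-level integration looks like in Aurora MySQL, assuming a hypothetical SageMaker endpoint and table (these names are placeholders, not from the keynote):

```sql
-- Map a SQL function to a deployed SageMaker endpoint
-- (endpoint, function and table names are illustrative placeholders).
CREATE FUNCTION predict_churn (age BIGINT, monthly_usage DOUBLE)
RETURNS DOUBLE
ALIAS AWS_SAGEMAKER_INVOKE_ENDPOINT
ENDPOINT NAME 'churn-endpoint';

-- Call the model inline, like any other SQL function:
SELECT customer_id, predict_churn(age, monthly_usage) AS churn_score
FROM customers;
```

The point is that the application only ever speaks SQL; Aurora handles batching the rows, calling the endpoint, and returning predictions.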
Neptune ML brings predictions to their fully managed graph database service in the form of graph neural networks and the Deep Graph Library.
For companies that handle medical data, Amazon HealthLake is worth looking at. With built-in data query, search and ML capabilities, you can seamlessly transform data to surface meaningful medical information at petabyte scale.
I hope you enjoy week 3 of the conference and will join us for the week 3 recap, as well as an overall conference recap, next week here on our blog!
-Gregory Tasonis, Sr. Cloud Consultant